00:00:00.001 Started by upstream project "autotest-per-patch" build number 131823 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.121 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.122 The recommended git tool is: git 00:00:00.122 using credential 00000000-0000-0000-0000-000000000002 00:00:00.124 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.169 Fetching changes from the remote Git repository 00:00:00.171 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.215 Using shallow fetch with depth 1 00:00:00.215 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.215 > git --version # timeout=10 00:00:00.252 > git --version # 'git version 2.39.2' 00:00:00.252 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.274 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.274 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.232 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.243 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.254 Checking out Revision 58e4f482292076ec19d68e6712473e60ef956aed (FETCH_HEAD) 00:00:07.254 > git config core.sparsecheckout # timeout=10 00:00:07.263 > git read-tree -mu HEAD # timeout=10 00:00:07.278 > git checkout -f 58e4f482292076ec19d68e6712473e60ef956aed # timeout=5 00:00:07.301 Commit message: "packer: Fix typo in a package name" 00:00:07.301 > git rev-list --no-walk 58e4f482292076ec19d68e6712473e60ef956aed # timeout=10 00:00:07.386 [Pipeline] Start of Pipeline 00:00:07.401 [Pipeline] library 00:00:07.403 Loading library shm_lib@master 00:00:07.403 Library shm_lib@master is cached. Copying from home. 00:00:07.448 [Pipeline] node 00:00:07.459 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:07.461 [Pipeline] { 00:00:07.472 [Pipeline] catchError 00:00:07.474 [Pipeline] { 00:00:07.489 [Pipeline] wrap 00:00:07.499 [Pipeline] { 00:00:07.508 [Pipeline] stage 00:00:07.511 [Pipeline] { (Prologue) 00:00:07.529 [Pipeline] echo 00:00:07.531 Node: VM-host-SM38 00:00:07.539 [Pipeline] cleanWs 00:00:07.549 [WS-CLEANUP] Deleting project workspace... 00:00:07.549 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.556 [WS-CLEANUP] done 00:00:07.764 [Pipeline] setCustomBuildProperty 00:00:07.847 [Pipeline] httpRequest 00:00:08.224 [Pipeline] echo 00:00:08.226 Sorcerer 10.211.164.101 is alive 00:00:08.233 [Pipeline] retry 00:00:08.235 [Pipeline] { 00:00:08.246 [Pipeline] httpRequest 00:00:08.250 HttpMethod: GET 00:00:08.251 URL: http://10.211.164.101/packages/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:00:08.251 Sending request to url: http://10.211.164.101/packages/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:00:08.264 Response Code: HTTP/1.1 200 OK 00:00:08.265 Success: Status code 200 is in the accepted range: 200,404 00:00:08.266 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:00:10.819 [Pipeline] } 00:00:10.837 [Pipeline] // retry 00:00:10.844 [Pipeline] sh 00:00:11.135 + tar --no-same-owner -xf jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:00:11.154 [Pipeline] httpRequest 00:00:11.523 [Pipeline] echo 00:00:11.525 Sorcerer 10.211.164.101 is alive 00:00:11.534 [Pipeline] retry 00:00:11.536 [Pipeline] { 00:00:11.551 [Pipeline] httpRequest 00:00:11.557 HttpMethod: GET 00:00:11.558 URL: http://10.211.164.101/packages/spdk_e83d2213a131d4efb80824eac72f5f2d867e5b35.tar.gz 00:00:11.558 Sending request to url: http://10.211.164.101/packages/spdk_e83d2213a131d4efb80824eac72f5f2d867e5b35.tar.gz 00:00:11.572 Response Code: HTTP/1.1 200 OK 00:00:11.572 Success: Status code 200 is in the accepted range: 200,404 00:00:11.573 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_e83d2213a131d4efb80824eac72f5f2d867e5b35.tar.gz 00:01:14.421 [Pipeline] } 00:01:14.438 [Pipeline] // retry 00:01:14.445 [Pipeline] sh 00:01:14.732 + tar --no-same-owner -xf spdk_e83d2213a131d4efb80824eac72f5f2d867e5b35.tar.gz 00:01:18.167 [Pipeline] sh 00:01:18.455 + git -C spdk log --oneline -n5 00:01:18.455 e83d2213a bdev: Add spdk_bdev_io_to_ctx 00:01:18.455 cab1decc1 thread: add NUMA node support to spdk_iobuf_put() 00:01:18.455 40c9acf6d env: add spdk_mem_get_numa_id 00:01:18.455 0f99ab2fa thread: allocate iobuf memory based on numa_id 00:01:18.455 2ef611c19 thread: update all iobuf non-get/put functions for multiple NUMA nodes 00:01:18.476 [Pipeline] writeFile 00:01:18.491 [Pipeline] sh 00:01:18.779 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:18.793 [Pipeline] sh 00:01:19.082 + cat autorun-spdk.conf 00:01:19.082 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.082 SPDK_TEST_NVME=1 00:01:19.082 SPDK_TEST_FTL=1 00:01:19.082 SPDK_TEST_ISAL=1 00:01:19.082 SPDK_RUN_ASAN=1 00:01:19.082 SPDK_RUN_UBSAN=1 00:01:19.082 SPDK_TEST_XNVME=1 00:01:19.082 SPDK_TEST_NVME_FDP=1 00:01:19.082 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:19.091 RUN_NIGHTLY=0 00:01:19.093 [Pipeline] } 00:01:19.107 [Pipeline] // stage 00:01:19.123 [Pipeline] stage 00:01:19.125 [Pipeline] { (Run VM) 00:01:19.139 [Pipeline] sh 00:01:19.425 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:19.425 + echo 'Start stage prepare_nvme.sh' 00:01:19.425 Start stage prepare_nvme.sh 00:01:19.425 + [[ -n 2 ]] 00:01:19.425 + disk_prefix=ex2 00:01:19.425 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:19.425 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:19.425 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:19.425 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.425 ++ SPDK_TEST_NVME=1 00:01:19.425 ++ SPDK_TEST_FTL=1 00:01:19.425 ++ SPDK_TEST_ISAL=1 00:01:19.425 ++ SPDK_RUN_ASAN=1 
00:01:19.425 ++ SPDK_RUN_UBSAN=1 00:01:19.425 ++ SPDK_TEST_XNVME=1 00:01:19.425 ++ SPDK_TEST_NVME_FDP=1 00:01:19.425 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:19.425 ++ RUN_NIGHTLY=0 00:01:19.425 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:19.425 + nvme_files=() 00:01:19.425 + declare -A nvme_files 00:01:19.425 + backend_dir=/var/lib/libvirt/images/backends 00:01:19.425 + nvme_files['nvme.img']=5G 00:01:19.425 + nvme_files['nvme-cmb.img']=5G 00:01:19.425 + nvme_files['nvme-multi0.img']=4G 00:01:19.425 + nvme_files['nvme-multi1.img']=4G 00:01:19.425 + nvme_files['nvme-multi2.img']=4G 00:01:19.425 + nvme_files['nvme-openstack.img']=8G 00:01:19.425 + nvme_files['nvme-zns.img']=5G 00:01:19.425 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:19.425 + (( SPDK_TEST_FTL == 1 )) 00:01:19.425 + nvme_files["nvme-ftl.img"]=6G 00:01:19.425 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:19.425 + nvme_files["nvme-fdp.img"]=1G 00:01:19.425 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:19.425 + for nvme in "${!nvme_files[@]}" 00:01:19.425 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:01:19.425 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:19.425 + for nvme in "${!nvme_files[@]}" 00:01:19.425 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-ftl.img -s 6G 00:01:20.369 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:20.369 + for nvme in "${!nvme_files[@]}" 00:01:20.369 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:01:20.369 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:20.369 + for nvme in "${!nvme_files[@]}" 00:01:20.369 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:01:20.369 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:20.369 + for nvme in "${!nvme_files[@]}" 00:01:20.369 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:01:20.369 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:20.369 + for nvme in "${!nvme_files[@]}" 00:01:20.369 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:01:20.631 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:20.631 + for nvme in "${!nvme_files[@]}" 00:01:20.631 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:01:20.891 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:20.891 + for nvme in "${!nvme_files[@]}" 00:01:20.891 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-fdp.img -s 1G 00:01:20.891 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:20.891 + for nvme in "${!nvme_files[@]}" 00:01:20.891 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:01:21.463 Formatting 
'/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:21.463 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:01:21.463 + echo 'End stage prepare_nvme.sh' 00:01:21.463 End stage prepare_nvme.sh 00:01:21.475 [Pipeline] sh 00:01:21.758 + DISTRO=fedora39 00:01:21.758 + CPUS=10 00:01:21.758 + RAM=12288 00:01:21.758 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:21.758 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex2-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:21.758 00:01:21.758 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:21.758 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:21.758 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:21.758 HELP=0 00:01:21.758 DRY_RUN=0 00:01:21.758 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,/var/lib/libvirt/images/backends/ex2-nvme-fdp.img, 00:01:21.758 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:21.758 NVME_AUTO_CREATE=0 00:01:21.758 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,, 00:01:21.758 NVME_CMB=,,,, 00:01:21.758 NVME_PMR=,,,, 00:01:21.758 NVME_ZNS=,,,, 00:01:21.758 NVME_MS=true,,,, 00:01:21.758 NVME_FDP=,,,on, 00:01:21.758 SPDK_VAGRANT_DISTRO=fedora39 00:01:21.758 SPDK_VAGRANT_VMCPU=10 00:01:21.758 SPDK_VAGRANT_VMRAM=12288 00:01:21.758 SPDK_VAGRANT_PROVIDER=libvirt 00:01:21.758 SPDK_VAGRANT_HTTP_PROXY= 00:01:21.758 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:21.758 SPDK_OPENSTACK_NETWORK=0 00:01:21.758 VAGRANT_PACKAGE_BOX=0 00:01:21.758 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:21.758 FORCE_DISTRO=true 00:01:21.758 VAGRANT_BOX_VERSION= 00:01:21.758 EXTRA_VAGRANTFILES= 00:01:21.758 NIC_MODEL=e1000 00:01:21.758 00:01:21.758 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:21.758 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:24.341 Bringing machine 'default' up with 'libvirt' provider... 00:01:24.602 ==> default: Creating image (snapshot of base box volume). 00:01:24.865 ==> default: Creating domain with the following settings... 
00:01:24.865 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1729878162_eaf1a0946ed55963ba7f 00:01:24.865 ==> default: -- Domain type: kvm 00:01:24.865 ==> default: -- Cpus: 10 00:01:24.865 ==> default: -- Feature: acpi 00:01:24.865 ==> default: -- Feature: apic 00:01:24.865 ==> default: -- Feature: pae 00:01:24.865 ==> default: -- Memory: 12288M 00:01:24.865 ==> default: -- Memory Backing: hugepages: 00:01:24.865 ==> default: -- Management MAC: 00:01:24.865 ==> default: -- Loader: 00:01:24.865 ==> default: -- Nvram: 00:01:24.865 ==> default: -- Base box: spdk/fedora39 00:01:24.865 ==> default: -- Storage pool: default 00:01:24.865 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1729878162_eaf1a0946ed55963ba7f.img (20G) 00:01:24.865 ==> default: -- Volume Cache: default 00:01:24.865 ==> default: -- Kernel: 00:01:24.865 ==> default: -- Initrd: 00:01:24.865 ==> default: -- Graphics Type: vnc 00:01:24.865 ==> default: -- Graphics Port: -1 00:01:24.865 ==> default: -- Graphics IP: 127.0.0.1 00:01:24.865 ==> default: -- Graphics Password: Not defined 00:01:24.865 ==> default: -- Video Type: cirrus 00:01:24.865 ==> default: -- Video VRAM: 9216 00:01:24.865 ==> default: -- Sound Type: 00:01:24.865 ==> default: -- Keymap: en-us 00:01:24.865 ==> default: -- TPM Path: 00:01:24.865 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:24.865 ==> default: -- Command line args: 00:01:24.865 ==> default: -> value=-device, 00:01:24.865 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:24.865 ==> default: -> value=-drive, 00:01:24.865 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:24.865 ==> default: -> value=-device, 00:01:24.865 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:24.865 ==> default: -> value=-device, 00:01:24.865 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:24.865 ==> default: -> value=-drive, 00:01:24.865 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-1-drive0, 00:01:24.865 ==> default: -> value=-device, 00:01:24.865 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.865 ==> default: -> value=-device, 00:01:24.865 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:24.865 ==> default: -> value=-drive, 00:01:24.865 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:24.865 ==> default: -> value=-device, 00:01:24.865 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.865 ==> default: -> value=-drive, 00:01:24.865 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:24.865 ==> default: -> value=-device, 00:01:24.865 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.865 ==> default: -> value=-drive, 00:01:24.865 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:24.865 ==> default: -> value=-device, 00:01:24.865 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.865 ==> default: -> value=-device, 00:01:24.865 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:24.865 ==> default: -> value=-device, 00:01:24.865 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:24.865 ==> default: -> value=-drive, 00:01:24.865 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:24.865 ==> default: -> value=-device, 00:01:24.865 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:25.126 ==> default: Creating shared folders metadata... 00:01:25.126 ==> default: Starting domain. 00:01:27.671 ==> default: Waiting for domain to get an IP address... 00:01:45.835 ==> default: Waiting for SSH to become available... 00:01:45.835 ==> default: Configuring and enabling network interfaces... 00:01:49.162 default: SSH address: 192.168.121.91:22 00:01:49.162 default: SSH username: vagrant 00:01:49.162 default: SSH auth method: private key 00:01:51.078 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:59.223 ==> default: Mounting SSHFS shared folder... 00:02:01.137 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:01.137 ==> default: Checking Mount.. 00:02:02.079 ==> default: Folder Successfully Mounted! 00:02:02.079 00:02:02.079 SUCCESS! 00:02:02.079 00:02:02.079 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:02.079 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:02.079 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:02.079 00:02:02.089 [Pipeline] } 00:02:02.102 [Pipeline] // stage 00:02:02.109 [Pipeline] dir 00:02:02.109 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:02.111 [Pipeline] { 00:02:02.122 [Pipeline] catchError 00:02:02.123 [Pipeline] { 00:02:02.135 [Pipeline] sh 00:02:02.419 + vagrant ssh-config --host vagrant 00:02:02.419 + sed -ne '/^Host/,$p' 00:02:02.419 + tee ssh_conf 00:02:05.717 Host vagrant 00:02:05.717 HostName 192.168.121.91 00:02:05.717 User vagrant 00:02:05.717 Port 22 00:02:05.717 UserKnownHostsFile /dev/null 00:02:05.717 StrictHostKeyChecking no 00:02:05.717 PasswordAuthentication no 00:02:05.717 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:05.717 IdentitiesOnly yes 00:02:05.717 LogLevel FATAL 00:02:05.717 ForwardAgent yes 00:02:05.717 ForwardX11 yes 00:02:05.717 00:02:05.732 [Pipeline] withEnv 00:02:05.734 [Pipeline] { 00:02:05.747 [Pipeline] sh 00:02:06.050 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:06.050 source /etc/os-release 00:02:06.050 [[ -e /image.version ]] && img=$(< /image.version) 00:02:06.050 # Minimal, systemd-like check. 
00:02:06.050 if [[ -e /.dockerenv ]]; then 00:02:06.050 # Clear garbage from the node'\''s name: 00:02:06.050 # agt-er_autotest_547-896 -> autotest_547-896 00:02:06.050 # $HOSTNAME is the actual container id 00:02:06.050 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:06.050 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:06.050 # We can assume this is a mount from a host where container is running, 00:02:06.050 # so fetch its hostname to easily identify the target swarm worker. 00:02:06.050 container="$(< /etc/hostname) ($agent)" 00:02:06.050 else 00:02:06.050 # Fallback 00:02:06.050 container=$agent 00:02:06.050 fi 00:02:06.050 fi 00:02:06.050 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:06.050 ' 00:02:06.083 [Pipeline] } 00:02:06.096 [Pipeline] // withEnv 00:02:06.104 [Pipeline] setCustomBuildProperty 00:02:06.115 [Pipeline] stage 00:02:06.117 [Pipeline] { (Tests) 00:02:06.132 [Pipeline] sh 00:02:06.418 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:06.694 [Pipeline] sh 00:02:06.980 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:07.257 [Pipeline] timeout 00:02:07.258 Timeout set to expire in 50 min 00:02:07.260 [Pipeline] { 00:02:07.275 [Pipeline] sh 00:02:07.562 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:08.130 HEAD is now at e83d2213a bdev: Add spdk_bdev_io_to_ctx 00:02:08.141 [Pipeline] sh 00:02:08.419 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:08.688 [Pipeline] sh 00:02:08.965 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:08.977 [Pipeline] sh 00:02:09.250 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:09.250 ++ readlink -f spdk_repo 00:02:09.250 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:09.250 + [[ -n /home/vagrant/spdk_repo ]] 00:02:09.250 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:09.250 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:09.250 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:09.250 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:09.250 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:09.250 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:09.250 + cd /home/vagrant/spdk_repo 00:02:09.250 + source /etc/os-release 00:02:09.250 ++ NAME='Fedora Linux' 00:02:09.250 ++ VERSION='39 (Cloud Edition)' 00:02:09.250 ++ ID=fedora 00:02:09.250 ++ VERSION_ID=39 00:02:09.250 ++ VERSION_CODENAME= 00:02:09.250 ++ PLATFORM_ID=platform:f39 00:02:09.250 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:09.250 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:09.250 ++ LOGO=fedora-logo-icon 00:02:09.250 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:09.250 ++ HOME_URL=https://fedoraproject.org/ 00:02:09.250 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:09.250 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:09.250 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:09.250 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:09.250 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:09.250 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:09.250 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:09.250 ++ SUPPORT_END=2024-11-12 00:02:09.250 ++ VARIANT='Cloud Edition' 00:02:09.250 ++ VARIANT_ID=cloud 00:02:09.250 + uname -a 00:02:09.250 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:09.250 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:09.816 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:09.816 Hugepages 00:02:09.816 node hugesize free / total 00:02:09.816 node0 1048576kB 0 / 0 00:02:09.816 node0 2048kB 0 / 0 00:02:09.816 00:02:09.816 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:09.816 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:10.074 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:10.075 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:10.075 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3 00:02:10.075 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:02:10.075 + rm -f /tmp/spdk-ld-path 00:02:10.075 + source autorun-spdk.conf 00:02:10.075 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.075 ++ SPDK_TEST_NVME=1 00:02:10.075 ++ SPDK_TEST_FTL=1 00:02:10.075 ++ SPDK_TEST_ISAL=1 00:02:10.075 ++ SPDK_RUN_ASAN=1 00:02:10.075 ++ SPDK_RUN_UBSAN=1 00:02:10.075 ++ SPDK_TEST_XNVME=1 00:02:10.075 ++ SPDK_TEST_NVME_FDP=1 00:02:10.075 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:10.075 ++ RUN_NIGHTLY=0 00:02:10.075 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:10.075 + [[ -n '' ]] 00:02:10.075 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:10.075 + for M in /var/spdk/build-*-manifest.txt 00:02:10.075 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:10.075 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:10.075 + for M in /var/spdk/build-*-manifest.txt 00:02:10.075 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:10.075 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:10.075 + for M in /var/spdk/build-*-manifest.txt 00:02:10.075 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:10.075 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:10.075 ++ uname 00:02:10.075 + [[ Linux == \L\i\n\u\x ]] 00:02:10.075 + sudo dmesg -T 00:02:10.075 + sudo dmesg --clear 00:02:10.075 + dmesg_pid=5032 00:02:10.075 
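The trace above shows the pattern this harness leans on throughout the run: a flat key=value file (autorun-spdk.conf) is sourced into the shell, and individual stages are gated on the resulting variables with arithmetic checks such as (( SPDK_TEST_FTL == 1 )) — that is exactly how the nvme-ftl.img and nvme-fdp.img backing files were added during prepare_nvme.sh earlier. A minimal standalone sketch of that gating pattern follows; my-tests.conf and the echo bodies are illustrative stand-ins, not the real SPDK scripts.

#!/usr/bin/env bash
# Sketch of the conf-driven stage gating used by this harness.
# "my-tests.conf" is a hypothetical file, e.g. containing: SPDK_TEST_FTL=1
set -euo pipefail

source ./my-tests.conf

: "${SPDK_TEST_FTL:=0}"        # default any unset flag to 0
: "${SPDK_TEST_NVME_FDP:=0}"

if (( SPDK_TEST_FTL == 1 )); then
    echo "FTL enabled: would provision a 6G nvme-ftl backing image"
fi
if (( SPDK_TEST_NVME_FDP == 1 )); then
    echo "FDP enabled: would provision a 1G nvme-fdp backing image"
fi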
+ [[ Fedora Linux == FreeBSD ]] 00:02:10.075 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:10.075 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:10.075 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:10.075 + sudo dmesg -Tw 00:02:10.075 + [[ -x /usr/src/fio-static/fio ]] 00:02:10.075 + export FIO_BIN=/usr/src/fio-static/fio 00:02:10.075 + FIO_BIN=/usr/src/fio-static/fio 00:02:10.075 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:10.075 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:10.075 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:10.075 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:10.075 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:10.075 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:10.075 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:10.075 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:10.075 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:10.075 Test configuration: 00:02:10.075 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.075 SPDK_TEST_NVME=1 00:02:10.075 SPDK_TEST_FTL=1 00:02:10.075 SPDK_TEST_ISAL=1 00:02:10.075 SPDK_RUN_ASAN=1 00:02:10.075 SPDK_RUN_UBSAN=1 00:02:10.075 SPDK_TEST_XNVME=1 00:02:10.075 SPDK_TEST_NVME_FDP=1 00:02:10.075 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:10.075 RUN_NIGHTLY=0 17:43:28 -- common/autotest_common.sh@1688 -- $ [[ n == y ]] 00:02:10.075 17:43:28 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:10.075 17:43:28 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:10.075 17:43:28 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:10.075 17:43:28 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:10.075 17:43:28 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:10.075 17:43:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.075 17:43:28 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.075 17:43:28 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.075 17:43:28 -- paths/export.sh@5 -- $ export PATH 00:02:10.075 17:43:28 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.075 17:43:28 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:10.075 17:43:28 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:10.075 17:43:28 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1729878208.XXXXXX 00:02:10.075 17:43:28 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1729878208.3JzHVJ 00:02:10.075 17:43:28 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:10.075 17:43:28 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:02:10.075 17:43:28 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:10.075 17:43:28 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:10.075 17:43:28 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:10.075 17:43:28 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:10.075 17:43:28 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:10.075 17:43:28 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.075 17:43:28 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:10.075 17:43:28 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:10.075 17:43:28 -- pm/common@17 -- $ local monitor 00:02:10.075 17:43:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.075 17:43:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.075 17:43:28 -- pm/common@25 -- $ sleep 1 00:02:10.075 17:43:28 -- pm/common@21 -- $ date +%s 00:02:10.075 17:43:28 -- pm/common@21 -- $ date +%s 00:02:10.075 17:43:28 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1729878208 00:02:10.075 17:43:28 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1729878208 00:02:10.334 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1729878208_collect-cpu-load.pm.log 00:02:10.334 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1729878208_collect-vmstat.pm.log 00:02:11.273 17:43:29 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:11.273 17:43:29 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:11.273 17:43:29 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:11.273 17:43:29 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:11.273 17:43:29 -- spdk/autobuild.sh@16 -- $ date -u 00:02:11.273 Fri Oct 25 05:43:29 PM UTC 2024 00:02:11.273 17:43:29 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:11.273 v25.01-pre-118-ge83d2213a 
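The autobuild prologue above starts two background resource monitors (collect-cpu-load and collect-vmstat, each redirecting into the output/power directory) and registers trap stop_monitor_resources EXIT so they are torn down even if the build aborts. A minimal sketch of that start/stop-on-exit pattern, assuming a trivial uptime loop in place of the real scripts/perf/pm collectors:

#!/usr/bin/env bash
# Background-monitor pattern: start collectors, stop them on any exit path.
set -euo pipefail

monitor_pids=()

start_monitor() {   # $1 = logfile to append samples to
    while sleep 1; do uptime; done >>"$1" &
    monitor_pids+=("$!")
}

stop_monitors() {
    for pid in "${monitor_pids[@]}"; do
        kill "$pid" 2>/dev/null || true
    done
}
trap stop_monitors EXIT   # runs on normal exit, errors, and signals bash forwards

start_monitor /tmp/cpu-load.log
start_monitor /tmp/vmstat.log
make -j10                 # the long-running work being profiled goes here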
00:02:11.273 17:43:29 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:11.273 17:43:29 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:11.273 17:43:29 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:11.273 17:43:29 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:11.273 17:43:29 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.273 ************************************ 00:02:11.273 START TEST asan 00:02:11.273 ************************************ 00:02:11.273 using asan 00:02:11.273 17:43:29 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:11.273 00:02:11.273 real 0m0.000s 00:02:11.273 user 0m0.000s 00:02:11.273 sys 0m0.000s 00:02:11.273 17:43:29 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:11.273 17:43:29 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:11.273 ************************************ 00:02:11.273 END TEST asan 00:02:11.273 ************************************ 00:02:11.273 17:43:29 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:11.273 17:43:29 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:11.273 17:43:29 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:11.273 17:43:29 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:11.273 17:43:29 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.273 ************************************ 00:02:11.273 START TEST ubsan 00:02:11.273 ************************************ 00:02:11.273 using ubsan 00:02:11.273 17:43:29 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:11.273 00:02:11.273 real 0m0.000s 00:02:11.273 user 0m0.000s 00:02:11.273 sys 0m0.000s 00:02:11.273 17:43:29 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:11.273 17:43:29 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:11.273 ************************************ 00:02:11.273 END TEST ubsan 00:02:11.273 ************************************ 00:02:11.273 17:43:29 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:11.273 17:43:29 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:11.273 17:43:29 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:11.273 17:43:29 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:11.273 17:43:29 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:11.273 17:43:29 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:11.273 17:43:29 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:11.273 17:43:29 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:11.273 17:43:29 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:11.273 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:11.273 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:11.842 Using 'verbs' RDMA provider 00:02:22.467 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:32.435 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:32.435 Creating mk/config.mk...done. 00:02:32.435 Creating mk/cc.flags.mk...done. 00:02:32.435 Type 'make' to build. 
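The configure invocation above records everything needed to reproduce this job's build outside CI. A sketch of the equivalent local sequence, using the flags exactly as logged; the checkout path mirrors this run's /home/vagrant/spdk_repo/spdk, and --with-fio should point at wherever the fio sources actually live on the local machine:

# Reproduce this job's SPDK build configuration locally (sketch).
cd ~/spdk_repo/spdk
./configure --enable-debug --enable-werror \
    --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator \
    --disable-unit-tests --enable-ubsan --enable-asan \
    --enable-coverage --with-ublk --with-xnvme --with-shared
make -j"$(nproc)"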
00:02:32.435 17:43:50 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:32.435 17:43:50 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:32.435 17:43:50 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:32.435 17:43:50 -- common/autotest_common.sh@10 -- $ set +x 00:02:32.435 ************************************ 00:02:32.435 START TEST make 00:02:32.435 ************************************ 00:02:32.435 17:43:50 make -- common/autotest_common.sh@1125 -- $ make -j10 00:02:32.693 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:32.693 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:32.693 meson setup builddir \ 00:02:32.693 -Dwith-libaio=enabled \ 00:02:32.693 -Dwith-liburing=enabled \ 00:02:32.693 -Dwith-libvfn=disabled \ 00:02:32.693 -Dwith-spdk=disabled \ 00:02:32.693 -Dexamples=false \ 00:02:32.693 -Dtests=false \ 00:02:32.693 -Dtools=false && \ 00:02:32.693 meson compile -C builddir && \ 00:02:32.693 cd -) 00:02:32.693 make[1]: Nothing to be done for 'all'. 00:02:34.594 The Meson build system 00:02:34.594 Version: 1.5.0 00:02:34.594 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:34.594 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:34.594 Build type: native build 00:02:34.594 Project name: xnvme 00:02:34.594 Project version: 0.7.5 00:02:34.594 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:34.594 C linker for the host machine: cc ld.bfd 2.40-14 00:02:34.594 Host machine cpu family: x86_64 00:02:34.594 Host machine cpu: x86_64 00:02:34.594 Message: host_machine.system: linux 00:02:34.594 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:34.594 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:34.594 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:34.594 Run-time dependency threads found: YES 00:02:34.594 Has header "setupapi.h" : NO 00:02:34.594 Has header "linux/blkzoned.h" : YES 00:02:34.594 Has header "linux/blkzoned.h" : YES (cached) 00:02:34.594 Has header "libaio.h" : YES 00:02:34.594 Library aio found: YES 00:02:34.594 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:34.594 Run-time dependency liburing found: YES 2.2 00:02:34.594 Dependency libvfn skipped: feature with-libvfn disabled 00:02:34.594 Found CMake: /usr/bin/cmake (3.27.7) 00:02:34.594 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:02:34.594 Subproject spdk : skipped: feature with-spdk disabled 00:02:34.594 Run-time dependency appleframeworks found: NO (tried framework) 00:02:34.594 Run-time dependency appleframeworks found: NO (tried framework) 00:02:34.594 Library rt found: YES 00:02:34.594 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:34.594 Configuring xnvme_config.h using configuration 00:02:34.594 Configuring xnvme.spec using configuration 00:02:34.594 Run-time dependency bash-completion found: YES 2.11 00:02:34.594 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:34.594 Program cp found: YES (/usr/bin/cp) 00:02:34.594 Build targets in project: 3 00:02:34.594 00:02:34.594 xnvme 0.7.5 00:02:34.594 00:02:34.594 Subprojects 00:02:34.594 spdk : NO Feature 'with-spdk' disabled 00:02:34.594 00:02:34.594 User defined options 00:02:34.594 examples : false 00:02:34.594 tests : false 00:02:34.594 tools : false 00:02:34.594 with-libaio : enabled 00:02:34.594 with-liburing: enabled 00:02:34.594 with-libvfn : disabled 00:02:34.594 with-spdk : disabled 
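The "User defined options" summary above reflects the -D flags passed to meson setup at the top of this make stage. Meson can report and modify those values in an existing build directory without reconfiguring from scratch; a sketch against this run's builddir path:

# Inspect the current option values for the xnvme build (sketch).
meson configure /home/vagrant/spdk_repo/spdk/xnvme/builddir

# Flip a single option in place, e.g. re-enable the bundled examples,
# then rebuild only what that change touches.
meson configure -Dexamples=true /home/vagrant/spdk_repo/spdk/xnvme/builddir
meson compile -C /home/vagrant/spdk_repo/spdk/xnvme/builddir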
00:02:34.594 00:02:34.594 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:34.852 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:34.852 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:02:34.852 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:02:34.852 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:02:34.852 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:02:34.852 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:02:34.852 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:02:35.110 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:02:35.110 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:02:35.110 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:02:35.110 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:02:35.110 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:02:35.110 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:02:35.110 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:02:35.110 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:02:35.110 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:02:35.110 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:02:35.110 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:02:35.110 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:02:35.110 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:02:35.110 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:02:35.110 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:02:35.110 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:02:35.110 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:02:35.110 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:02:35.110 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:02:35.110 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:02:35.110 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:02:35.110 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:02:35.110 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:02:35.110 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:02:35.110 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:02:35.110 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:02:35.110 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:02:35.110 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:02:35.369 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:02:35.369 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:02:35.369 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:02:35.369 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:02:35.369 [39/76] Compiling C object 
lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:02:35.369 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:02:35.369 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:02:35.369 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:02:35.369 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:02:35.369 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:02:35.369 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:02:35.369 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:02:35.369 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:02:35.369 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:02:35.369 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:02:35.369 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:02:35.369 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:02:35.369 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:02:35.369 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:02:35.369 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:02:35.369 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:02:35.369 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:02:35.369 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:02:35.369 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:02:35.369 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:02:35.369 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:02:35.369 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:02:35.369 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:02:35.369 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:02:35.369 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:02:35.369 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:02:35.369 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:02:35.628 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:02:35.628 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:02:35.628 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:02:35.628 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:02:35.628 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:02:35.628 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:02:35.628 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:02:35.886 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:02:35.886 [75/76] Linking static target lib/libxnvme.a 00:02:36.144 [76/76] Linking target lib/libxnvme.so.0.7.5 00:02:36.144 INFO: autodetecting backend as ninja 00:02:36.144 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:36.144 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:41.410 The Meson build system 00:02:41.410 Version: 1.5.0 00:02:41.410 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:41.410 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:41.410 Build type: native build 00:02:41.410 Program 
cat found: YES (/usr/bin/cat) 00:02:41.410 Project name: DPDK 00:02:41.410 Project version: 24.03.0 00:02:41.410 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:41.410 C linker for the host machine: cc ld.bfd 2.40-14 00:02:41.410 Host machine cpu family: x86_64 00:02:41.410 Host machine cpu: x86_64 00:02:41.410 Message: ## Building in Developer Mode ## 00:02:41.410 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:41.410 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:41.410 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:41.410 Program python3 found: YES (/usr/bin/python3) 00:02:41.410 Program cat found: YES (/usr/bin/cat) 00:02:41.410 Compiler for C supports arguments -march=native: YES 00:02:41.410 Checking for size of "void *" : 8 00:02:41.410 Checking for size of "void *" : 8 (cached) 00:02:41.410 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:41.410 Library m found: YES 00:02:41.410 Library numa found: YES 00:02:41.410 Has header "numaif.h" : YES 00:02:41.410 Library fdt found: NO 00:02:41.410 Library execinfo found: NO 00:02:41.410 Has header "execinfo.h" : YES 00:02:41.410 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:41.410 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:41.410 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:41.410 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:41.410 Run-time dependency openssl found: YES 3.1.1 00:02:41.410 Run-time dependency libpcap found: YES 1.10.4 00:02:41.410 Has header "pcap.h" with dependency libpcap: YES 00:02:41.410 Compiler for C supports arguments -Wcast-qual: YES 00:02:41.410 Compiler for C supports arguments -Wdeprecated: YES 00:02:41.410 Compiler for C supports arguments -Wformat: YES 00:02:41.410 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:41.410 Compiler for C supports arguments -Wformat-security: NO 00:02:41.410 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:41.410 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:41.410 Compiler for C supports arguments -Wnested-externs: YES 00:02:41.410 Compiler for C supports arguments -Wold-style-definition: YES 00:02:41.410 Compiler for C supports arguments -Wpointer-arith: YES 00:02:41.410 Compiler for C supports arguments -Wsign-compare: YES 00:02:41.410 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:41.410 Compiler for C supports arguments -Wundef: YES 00:02:41.410 Compiler for C supports arguments -Wwrite-strings: YES 00:02:41.410 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:41.410 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:41.410 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:41.410 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:41.410 Program objdump found: YES (/usr/bin/objdump) 00:02:41.410 Compiler for C supports arguments -mavx512f: YES 00:02:41.411 Checking if "AVX512 checking" compiles: YES 00:02:41.411 Fetching value of define "__SSE4_2__" : 1 00:02:41.411 Fetching value of define "__AES__" : 1 00:02:41.411 Fetching value of define "__AVX__" : 1 00:02:41.411 Fetching value of define "__AVX2__" : 1 00:02:41.411 Fetching value of define "__AVX512BW__" : 1 00:02:41.411 Fetching value of define "__AVX512CD__" : 1 00:02:41.411 
Fetching value of define "__AVX512DQ__" : 1 00:02:41.411 Fetching value of define "__AVX512F__" : 1 00:02:41.411 Fetching value of define "__AVX512VL__" : 1 00:02:41.411 Fetching value of define "__PCLMUL__" : 1 00:02:41.411 Fetching value of define "__RDRND__" : 1 00:02:41.411 Fetching value of define "__RDSEED__" : 1 00:02:41.411 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:41.411 Fetching value of define "__znver1__" : (undefined) 00:02:41.411 Fetching value of define "__znver2__" : (undefined) 00:02:41.411 Fetching value of define "__znver3__" : (undefined) 00:02:41.411 Fetching value of define "__znver4__" : (undefined) 00:02:41.411 Library asan found: YES 00:02:41.411 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:41.411 Message: lib/log: Defining dependency "log" 00:02:41.411 Message: lib/kvargs: Defining dependency "kvargs" 00:02:41.411 Message: lib/telemetry: Defining dependency "telemetry" 00:02:41.411 Library rt found: YES 00:02:41.411 Checking for function "getentropy" : NO 00:02:41.411 Message: lib/eal: Defining dependency "eal" 00:02:41.411 Message: lib/ring: Defining dependency "ring" 00:02:41.411 Message: lib/rcu: Defining dependency "rcu" 00:02:41.411 Message: lib/mempool: Defining dependency "mempool" 00:02:41.411 Message: lib/mbuf: Defining dependency "mbuf" 00:02:41.411 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:41.411 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:41.411 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:41.411 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:41.411 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:41.411 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:41.411 Compiler for C supports arguments -mpclmul: YES 00:02:41.411 Compiler for C supports arguments -maes: YES 00:02:41.411 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:41.411 Compiler for C supports arguments -mavx512bw: YES 00:02:41.411 Compiler for C supports arguments -mavx512dq: YES 00:02:41.411 Compiler for C supports arguments -mavx512vl: YES 00:02:41.411 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:41.411 Compiler for C supports arguments -mavx2: YES 00:02:41.411 Compiler for C supports arguments -mavx: YES 00:02:41.411 Message: lib/net: Defining dependency "net" 00:02:41.411 Message: lib/meter: Defining dependency "meter" 00:02:41.411 Message: lib/ethdev: Defining dependency "ethdev" 00:02:41.411 Message: lib/pci: Defining dependency "pci" 00:02:41.411 Message: lib/cmdline: Defining dependency "cmdline" 00:02:41.411 Message: lib/hash: Defining dependency "hash" 00:02:41.411 Message: lib/timer: Defining dependency "timer" 00:02:41.411 Message: lib/compressdev: Defining dependency "compressdev" 00:02:41.411 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:41.411 Message: lib/dmadev: Defining dependency "dmadev" 00:02:41.411 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:41.411 Message: lib/power: Defining dependency "power" 00:02:41.411 Message: lib/reorder: Defining dependency "reorder" 00:02:41.411 Message: lib/security: Defining dependency "security" 00:02:41.411 Has header "linux/userfaultfd.h" : YES 00:02:41.411 Has header "linux/vduse.h" : YES 00:02:41.411 Message: lib/vhost: Defining dependency "vhost" 00:02:41.411 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:41.411 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:41.411 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 
00:02:41.411 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:41.411 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:41.411 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:41.411 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:41.411 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:41.411 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:41.411 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:41.411 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:41.411 Configuring doxy-api-html.conf using configuration 00:02:41.411 Configuring doxy-api-man.conf using configuration 00:02:41.411 Program mandb found: YES (/usr/bin/mandb) 00:02:41.411 Program sphinx-build found: NO 00:02:41.411 Configuring rte_build_config.h using configuration 00:02:41.411 Message: 00:02:41.411 ================= 00:02:41.411 Applications Enabled 00:02:41.411 ================= 00:02:41.411 00:02:41.411 apps: 00:02:41.411 00:02:41.411 00:02:41.411 Message: 00:02:41.411 ================= 00:02:41.411 Libraries Enabled 00:02:41.411 ================= 00:02:41.411 00:02:41.411 libs: 00:02:41.411 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:41.411 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:41.411 cryptodev, dmadev, power, reorder, security, vhost, 00:02:41.411 00:02:41.411 Message: 00:02:41.411 =============== 00:02:41.411 Drivers Enabled 00:02:41.411 =============== 00:02:41.411 00:02:41.411 common: 00:02:41.411 00:02:41.411 bus: 00:02:41.411 pci, vdev, 00:02:41.411 mempool: 00:02:41.411 ring, 00:02:41.411 dma: 00:02:41.411 00:02:41.411 net: 00:02:41.411 00:02:41.411 crypto: 00:02:41.411 00:02:41.411 compress: 00:02:41.411 00:02:41.411 vdpa: 00:02:41.411 00:02:41.411 00:02:41.411 Message: 00:02:41.411 ================= 00:02:41.411 Content Skipped 00:02:41.411 ================= 00:02:41.411 00:02:41.411 apps: 00:02:41.411 dumpcap: explicitly disabled via build config 00:02:41.411 graph: explicitly disabled via build config 00:02:41.411 pdump: explicitly disabled via build config 00:02:41.411 proc-info: explicitly disabled via build config 00:02:41.411 test-acl: explicitly disabled via build config 00:02:41.411 test-bbdev: explicitly disabled via build config 00:02:41.411 test-cmdline: explicitly disabled via build config 00:02:41.411 test-compress-perf: explicitly disabled via build config 00:02:41.411 test-crypto-perf: explicitly disabled via build config 00:02:41.411 test-dma-perf: explicitly disabled via build config 00:02:41.411 test-eventdev: explicitly disabled via build config 00:02:41.411 test-fib: explicitly disabled via build config 00:02:41.411 test-flow-perf: explicitly disabled via build config 00:02:41.411 test-gpudev: explicitly disabled via build config 00:02:41.411 test-mldev: explicitly disabled via build config 00:02:41.411 test-pipeline: explicitly disabled via build config 00:02:41.411 test-pmd: explicitly disabled via build config 00:02:41.411 test-regex: explicitly disabled via build config 00:02:41.411 test-sad: explicitly disabled via build config 00:02:41.411 test-security-perf: explicitly disabled via build config 00:02:41.411 00:02:41.411 libs: 00:02:41.411 argparse: explicitly disabled via build config 00:02:41.411 metrics: explicitly disabled via build config 00:02:41.411 acl: explicitly disabled via build config 00:02:41.411 bbdev: explicitly disabled 
via build config 00:02:41.411 bitratestats: explicitly disabled via build config 00:02:41.411 bpf: explicitly disabled via build config 00:02:41.411 cfgfile: explicitly disabled via build config 00:02:41.411 distributor: explicitly disabled via build config 00:02:41.411 efd: explicitly disabled via build config 00:02:41.411 eventdev: explicitly disabled via build config 00:02:41.411 dispatcher: explicitly disabled via build config 00:02:41.411 gpudev: explicitly disabled via build config 00:02:41.411 gro: explicitly disabled via build config 00:02:41.411 gso: explicitly disabled via build config 00:02:41.411 ip_frag: explicitly disabled via build config 00:02:41.411 jobstats: explicitly disabled via build config 00:02:41.411 latencystats: explicitly disabled via build config 00:02:41.411 lpm: explicitly disabled via build config 00:02:41.411 member: explicitly disabled via build config 00:02:41.411 pcapng: explicitly disabled via build config 00:02:41.411 rawdev: explicitly disabled via build config 00:02:41.411 regexdev: explicitly disabled via build config 00:02:41.411 mldev: explicitly disabled via build config 00:02:41.411 rib: explicitly disabled via build config 00:02:41.411 sched: explicitly disabled via build config 00:02:41.412 stack: explicitly disabled via build config 00:02:41.412 ipsec: explicitly disabled via build config 00:02:41.412 pdcp: explicitly disabled via build config 00:02:41.412 fib: explicitly disabled via build config 00:02:41.412 port: explicitly disabled via build config 00:02:41.412 pdump: explicitly disabled via build config 00:02:41.412 table: explicitly disabled via build config 00:02:41.412 pipeline: explicitly disabled via build config 00:02:41.412 graph: explicitly disabled via build config 00:02:41.412 node: explicitly disabled via build config 00:02:41.412 00:02:41.412 drivers: 00:02:41.412 common/cpt: not in enabled drivers build config 00:02:41.412 common/dpaax: not in enabled drivers build config 00:02:41.412 common/iavf: not in enabled drivers build config 00:02:41.412 common/idpf: not in enabled drivers build config 00:02:41.412 common/ionic: not in enabled drivers build config 00:02:41.412 common/mvep: not in enabled drivers build config 00:02:41.412 common/octeontx: not in enabled drivers build config 00:02:41.412 bus/auxiliary: not in enabled drivers build config 00:02:41.412 bus/cdx: not in enabled drivers build config 00:02:41.412 bus/dpaa: not in enabled drivers build config 00:02:41.412 bus/fslmc: not in enabled drivers build config 00:02:41.412 bus/ifpga: not in enabled drivers build config 00:02:41.412 bus/platform: not in enabled drivers build config 00:02:41.412 bus/uacce: not in enabled drivers build config 00:02:41.412 bus/vmbus: not in enabled drivers build config 00:02:41.412 common/cnxk: not in enabled drivers build config 00:02:41.412 common/mlx5: not in enabled drivers build config 00:02:41.412 common/nfp: not in enabled drivers build config 00:02:41.412 common/nitrox: not in enabled drivers build config 00:02:41.412 common/qat: not in enabled drivers build config 00:02:41.412 common/sfc_efx: not in enabled drivers build config 00:02:41.412 mempool/bucket: not in enabled drivers build config 00:02:41.412 mempool/cnxk: not in enabled drivers build config 00:02:41.412 mempool/dpaa: not in enabled drivers build config 00:02:41.412 mempool/dpaa2: not in enabled drivers build config 00:02:41.412 mempool/octeontx: not in enabled drivers build config 00:02:41.412 mempool/stack: not in enabled drivers build config 00:02:41.412 dma/cnxk: 
not in enabled drivers build config 00:02:41.412 dma/dpaa: not in enabled drivers build config 00:02:41.412 dma/dpaa2: not in enabled drivers build config 00:02:41.412 dma/hisilicon: not in enabled drivers build config 00:02:41.412 dma/idxd: not in enabled drivers build config 00:02:41.412 dma/ioat: not in enabled drivers build config 00:02:41.412 dma/skeleton: not in enabled drivers build config 00:02:41.412 net/af_packet: not in enabled drivers build config 00:02:41.412 net/af_xdp: not in enabled drivers build config 00:02:41.412 net/ark: not in enabled drivers build config 00:02:41.412 net/atlantic: not in enabled drivers build config 00:02:41.412 net/avp: not in enabled drivers build config 00:02:41.412 net/axgbe: not in enabled drivers build config 00:02:41.412 net/bnx2x: not in enabled drivers build config 00:02:41.412 net/bnxt: not in enabled drivers build config 00:02:41.412 net/bonding: not in enabled drivers build config 00:02:41.412 net/cnxk: not in enabled drivers build config 00:02:41.412 net/cpfl: not in enabled drivers build config 00:02:41.412 net/cxgbe: not in enabled drivers build config 00:02:41.412 net/dpaa: not in enabled drivers build config 00:02:41.412 net/dpaa2: not in enabled drivers build config 00:02:41.412 net/e1000: not in enabled drivers build config 00:02:41.412 net/ena: not in enabled drivers build config 00:02:41.412 net/enetc: not in enabled drivers build config 00:02:41.412 net/enetfec: not in enabled drivers build config 00:02:41.412 net/enic: not in enabled drivers build config 00:02:41.412 net/failsafe: not in enabled drivers build config 00:02:41.412 net/fm10k: not in enabled drivers build config 00:02:41.412 net/gve: not in enabled drivers build config 00:02:41.412 net/hinic: not in enabled drivers build config 00:02:41.412 net/hns3: not in enabled drivers build config 00:02:41.412 net/i40e: not in enabled drivers build config 00:02:41.412 net/iavf: not in enabled drivers build config 00:02:41.412 net/ice: not in enabled drivers build config 00:02:41.412 net/idpf: not in enabled drivers build config 00:02:41.412 net/igc: not in enabled drivers build config 00:02:41.412 net/ionic: not in enabled drivers build config 00:02:41.412 net/ipn3ke: not in enabled drivers build config 00:02:41.412 net/ixgbe: not in enabled drivers build config 00:02:41.412 net/mana: not in enabled drivers build config 00:02:41.412 net/memif: not in enabled drivers build config 00:02:41.412 net/mlx4: not in enabled drivers build config 00:02:41.412 net/mlx5: not in enabled drivers build config 00:02:41.412 net/mvneta: not in enabled drivers build config 00:02:41.412 net/mvpp2: not in enabled drivers build config 00:02:41.412 net/netvsc: not in enabled drivers build config 00:02:41.412 net/nfb: not in enabled drivers build config 00:02:41.412 net/nfp: not in enabled drivers build config 00:02:41.412 net/ngbe: not in enabled drivers build config 00:02:41.412 net/null: not in enabled drivers build config 00:02:41.412 net/octeontx: not in enabled drivers build config 00:02:41.412 net/octeon_ep: not in enabled drivers build config 00:02:41.412 net/pcap: not in enabled drivers build config 00:02:41.412 net/pfe: not in enabled drivers build config 00:02:41.412 net/qede: not in enabled drivers build config 00:02:41.412 net/ring: not in enabled drivers build config 00:02:41.412 net/sfc: not in enabled drivers build config 00:02:41.412 net/softnic: not in enabled drivers build config 00:02:41.412 net/tap: not in enabled drivers build config 00:02:41.412 net/thunderx: not in enabled 
drivers build config 00:02:41.412 net/txgbe: not in enabled drivers build config 00:02:41.412 net/vdev_netvsc: not in enabled drivers build config 00:02:41.412 net/vhost: not in enabled drivers build config 00:02:41.412 net/virtio: not in enabled drivers build config 00:02:41.412 net/vmxnet3: not in enabled drivers build config 00:02:41.412 raw/*: missing internal dependency, "rawdev" 00:02:41.412 crypto/armv8: not in enabled drivers build config 00:02:41.412 crypto/bcmfs: not in enabled drivers build config 00:02:41.412 crypto/caam_jr: not in enabled drivers build config 00:02:41.412 crypto/ccp: not in enabled drivers build config 00:02:41.412 crypto/cnxk: not in enabled drivers build config 00:02:41.412 crypto/dpaa_sec: not in enabled drivers build config 00:02:41.412 crypto/dpaa2_sec: not in enabled drivers build config 00:02:41.412 crypto/ipsec_mb: not in enabled drivers build config 00:02:41.412 crypto/mlx5: not in enabled drivers build config 00:02:41.412 crypto/mvsam: not in enabled drivers build config 00:02:41.412 crypto/nitrox: not in enabled drivers build config 00:02:41.412 crypto/null: not in enabled drivers build config 00:02:41.412 crypto/octeontx: not in enabled drivers build config 00:02:41.412 crypto/openssl: not in enabled drivers build config 00:02:41.412 crypto/scheduler: not in enabled drivers build config 00:02:41.412 crypto/uadk: not in enabled drivers build config 00:02:41.412 crypto/virtio: not in enabled drivers build config 00:02:41.412 compress/isal: not in enabled drivers build config 00:02:41.412 compress/mlx5: not in enabled drivers build config 00:02:41.412 compress/nitrox: not in enabled drivers build config 00:02:41.412 compress/octeontx: not in enabled drivers build config 00:02:41.412 compress/zlib: not in enabled drivers build config 00:02:41.412 regex/*: missing internal dependency, "regexdev" 00:02:41.412 ml/*: missing internal dependency, "mldev" 00:02:41.412 vdpa/ifc: not in enabled drivers build config 00:02:41.412 vdpa/mlx5: not in enabled drivers build config 00:02:41.412 vdpa/nfp: not in enabled drivers build config 00:02:41.412 vdpa/sfc: not in enabled drivers build config 00:02:41.412 event/*: missing internal dependency, "eventdev" 00:02:41.412 baseband/*: missing internal dependency, "bbdev" 00:02:41.412 gpu/*: missing internal dependency, "gpudev" 00:02:41.413 00:02:41.413 00:02:41.671 Build targets in project: 84 00:02:41.671 00:02:41.671 DPDK 24.03.0 00:02:41.671 00:02:41.671 User defined options 00:02:41.671 buildtype : debug 00:02:41.671 default_library : shared 00:02:41.671 libdir : lib 00:02:41.671 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:41.671 b_sanitize : address 00:02:41.671 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:41.671 c_link_args : 00:02:41.671 cpu_instruction_set: native 00:02:41.671 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:41.671 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:41.671 enable_docs : false 00:02:41.671 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:41.671 enable_kmods : false 00:02:41.671 max_lcores 
: 128 00:02:41.671 tests : false 00:02:41.671 00:02:41.671 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:41.930 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:41.930 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:41.930 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:41.930 [3/267] Linking static target lib/librte_kvargs.a 00:02:41.930 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:41.930 [5/267] Linking static target lib/librte_log.a 00:02:41.930 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:42.189 [7/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:42.189 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:42.189 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:42.448 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:42.448 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:42.448 [12/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:42.448 [13/267] Linking static target lib/librte_telemetry.a 00:02:42.448 [14/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.448 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:42.448 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:42.448 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:42.448 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:42.706 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:42.706 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:42.706 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:42.706 [22/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.706 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:42.963 [24/267] Linking target lib/librte_log.so.24.1 00:02:42.963 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:42.963 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:42.963 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:42.963 [28/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:42.963 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:42.963 [30/267] Linking target lib/librte_kvargs.so.24.1 00:02:42.963 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:42.963 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:42.963 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:43.221 [34/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.221 [35/267] Linking target lib/librte_telemetry.so.24.1 00:02:43.221 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:43.221 [37/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:43.221 [38/267] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:43.221 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:43.221 [40/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:43.221 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:43.480 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:43.480 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:43.480 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:43.480 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:43.480 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:43.480 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:43.480 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:43.738 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:43.738 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:43.738 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:43.738 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:43.738 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:43.738 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:43.997 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:43.997 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:43.997 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:43.997 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:43.997 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:43.997 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:43.997 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:43.997 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:44.256 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:44.256 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:44.256 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:44.256 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:44.514 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:44.514 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:44.514 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:44.514 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:44.514 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:44.514 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:44.514 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:44.514 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:44.514 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:44.514 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:44.774 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:44.774 [78/267] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:44.774 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:44.774 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:44.774 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:45.032 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:45.032 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:45.032 [84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:45.032 [85/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:45.032 [86/267] Linking static target lib/librte_eal.a 00:02:45.032 [87/267] Linking static target lib/librte_ring.a 00:02:45.032 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:45.291 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:45.291 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:45.291 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:45.554 [92/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:45.554 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:45.554 [94/267] Linking static target lib/librte_rcu.a 00:02:45.554 [95/267] Linking static target lib/librte_mempool.a 00:02:45.554 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:45.554 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:45.554 [98/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:45.554 [99/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.812 [100/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:45.812 [101/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:45.812 [102/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.812 [103/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:45.812 [104/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:45.812 [105/267] Linking static target lib/librte_meter.a 00:02:45.812 [106/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:46.070 [107/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:46.071 [108/267] Linking static target lib/librte_net.a 00:02:46.329 [109/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.329 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:46.329 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:46.329 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:46.329 [113/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.329 [114/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:46.329 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:46.587 [116/267] Linking static target lib/librte_mbuf.a 00:02:46.587 [117/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.587 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:46.587 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:46.587 [120/267] 
Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:46.846 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:46.846 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:47.105 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:47.105 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:47.105 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:47.105 [126/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:47.105 [127/267] Linking static target lib/librte_pci.a 00:02:47.105 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:47.105 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:47.364 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:47.364 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:47.364 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:47.364 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:47.364 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:47.364 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:47.364 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:47.364 [137/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.364 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:47.364 [139/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.364 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:47.364 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:47.364 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:47.364 [143/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:47.364 [144/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:47.364 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:47.623 [146/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:47.623 [147/267] Linking static target lib/librte_cmdline.a 00:02:47.623 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:47.623 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:47.623 [150/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:47.623 [151/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:47.623 [152/267] Linking static target lib/librte_timer.a 00:02:47.882 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:47.883 [154/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:47.883 [155/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:47.883 [156/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:48.143 [157/267] Linking static target lib/librte_ethdev.a 00:02:48.143 [158/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:48.143 [159/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:48.143 [160/267] Compiling 
C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:48.143 [161/267] Linking static target lib/librte_compressdev.a 00:02:48.143 [162/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.143 [163/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:48.407 [164/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:48.407 [165/267] Linking static target lib/librte_hash.a 00:02:48.407 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:48.407 [167/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:48.407 [168/267] Linking static target lib/librte_dmadev.a 00:02:48.407 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:48.407 [170/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:48.666 [171/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:48.666 [172/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.666 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:48.925 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.925 [175/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:48.925 [176/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:48.925 [177/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:48.925 [178/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.925 [179/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:48.925 [180/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:48.925 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:48.925 [182/267] Linking static target lib/librte_cryptodev.a 00:02:49.183 [183/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.183 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:49.183 [185/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:49.183 [186/267] Linking static target lib/librte_power.a 00:02:49.183 [187/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:49.183 [188/267] Linking static target lib/librte_reorder.a 00:02:49.441 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:49.441 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:49.441 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:49.441 [192/267] Linking static target lib/librte_security.a 00:02:49.699 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.699 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:49.956 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:49.956 [196/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.214 [197/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.214 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:50.214 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:50.214 [200/267] 
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:50.472 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:50.472 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:50.472 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:50.472 [204/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:50.472 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:50.730 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:50.730 [207/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:50.730 [208/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:50.730 [209/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:50.730 [210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.730 [211/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:50.987 [212/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:50.987 [213/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:50.987 [214/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:50.987 [215/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:50.987 [216/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:50.987 [217/267] Linking static target drivers/librte_bus_pci.a 00:02:50.987 [218/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:50.987 [219/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:50.987 [220/267] Linking static target drivers/librte_bus_vdev.a 00:02:50.987 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:50.987 [222/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:50.987 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:50.987 [224/267] Linking static target drivers/librte_mempool_ring.a 00:02:50.987 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.245 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.503 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:52.437 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.437 [229/267] Linking target lib/librte_eal.so.24.1 00:02:52.695 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:52.695 [231/267] Linking target lib/librte_meter.so.24.1 00:02:52.695 [232/267] Linking target lib/librte_pci.so.24.1 00:02:52.695 [233/267] Linking target lib/librte_timer.so.24.1 00:02:52.695 [234/267] Linking target lib/librte_ring.so.24.1 00:02:52.696 [235/267] Linking target lib/librte_dmadev.so.24.1 00:02:52.696 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:52.696 [237/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:52.696 [238/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 
00:02:52.696 [239/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:52.696 [240/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:52.696 [241/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:52.696 [242/267] Linking target lib/librte_rcu.so.24.1 00:02:52.696 [243/267] Linking target lib/librte_mempool.so.24.1 00:02:52.696 [244/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:52.954 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:52.954 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:52.954 [247/267] Linking target lib/librte_mbuf.so.24.1 00:02:52.954 [248/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:52.954 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:52.954 [250/267] Linking target lib/librte_cryptodev.so.24.1 00:02:52.954 [251/267] Linking target lib/librte_net.so.24.1 00:02:52.954 [252/267] Linking target lib/librte_reorder.so.24.1 00:02:52.954 [253/267] Linking target lib/librte_compressdev.so.24.1 00:02:53.212 [254/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:53.212 [255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:53.212 [256/267] Linking target lib/librte_hash.so.24.1 00:02:53.212 [257/267] Linking target lib/librte_security.so.24.1 00:02:53.212 [258/267] Linking target lib/librte_cmdline.so.24.1 00:02:53.212 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:53.470 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.470 [261/267] Linking target lib/librte_ethdev.so.24.1 00:02:53.470 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:53.728 [263/267] Linking target lib/librte_power.so.24.1 00:02:54.400 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:54.400 [265/267] Linking static target lib/librte_vhost.a 00:02:55.774 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.774 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:55.774 INFO: autodetecting backend as ninja 00:02:55.774 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:10.749 CC lib/ut/ut.o 00:03:10.749 CC lib/ut_mock/mock.o 00:03:10.749 CC lib/log/log.o 00:03:10.749 CC lib/log/log_deprecated.o 00:03:10.749 CC lib/log/log_flags.o 00:03:10.749 LIB libspdk_ut_mock.a 00:03:10.749 LIB libspdk_ut.a 00:03:10.749 LIB libspdk_log.a 00:03:10.749 SO libspdk_ut.so.2.0 00:03:10.749 SO libspdk_ut_mock.so.6.0 00:03:10.749 SO libspdk_log.so.7.1 00:03:10.749 SYMLINK libspdk_ut_mock.so 00:03:10.749 SYMLINK libspdk_ut.so 00:03:10.749 SYMLINK libspdk_log.so 00:03:10.749 CC lib/ioat/ioat.o 00:03:10.749 CC lib/dma/dma.o 00:03:10.749 CXX lib/trace_parser/trace.o 00:03:10.749 CC lib/util/base64.o 00:03:10.749 CC lib/util/crc16.o 00:03:10.749 CC lib/util/cpuset.o 00:03:10.749 CC lib/util/bit_array.o 00:03:10.749 CC lib/util/crc32.o 00:03:10.749 CC lib/util/crc32c.o 00:03:10.749 CC lib/vfio_user/host/vfio_user_pci.o 00:03:10.749 CC lib/util/crc32_ieee.o 00:03:10.749 CC lib/util/crc64.o 00:03:10.749 CC lib/util/dif.o 00:03:10.749 CC lib/vfio_user/host/vfio_user.o 00:03:10.749 LIB 
libspdk_dma.a 00:03:10.749 SO libspdk_dma.so.5.0 00:03:10.749 CC lib/util/fd.o 00:03:10.749 CC lib/util/fd_group.o 00:03:10.749 SYMLINK libspdk_dma.so 00:03:10.749 CC lib/util/file.o 00:03:10.749 CC lib/util/hexlify.o 00:03:10.749 CC lib/util/iov.o 00:03:10.749 LIB libspdk_ioat.a 00:03:10.749 SO libspdk_ioat.so.7.0 00:03:10.749 CC lib/util/math.o 00:03:10.749 CC lib/util/net.o 00:03:10.749 SYMLINK libspdk_ioat.so 00:03:10.749 CC lib/util/pipe.o 00:03:10.749 LIB libspdk_vfio_user.a 00:03:10.749 CC lib/util/strerror_tls.o 00:03:10.749 CC lib/util/string.o 00:03:10.749 SO libspdk_vfio_user.so.5.0 00:03:10.749 CC lib/util/uuid.o 00:03:10.749 SYMLINK libspdk_vfio_user.so 00:03:10.749 CC lib/util/xor.o 00:03:10.749 CC lib/util/zipf.o 00:03:10.749 CC lib/util/md5.o 00:03:10.749 LIB libspdk_util.a 00:03:10.749 SO libspdk_util.so.10.0 00:03:10.749 LIB libspdk_trace_parser.a 00:03:10.749 SYMLINK libspdk_util.so 00:03:10.750 SO libspdk_trace_parser.so.6.0 00:03:10.750 SYMLINK libspdk_trace_parser.so 00:03:10.750 CC lib/json/json_parse.o 00:03:10.750 CC lib/json/json_write.o 00:03:10.750 CC lib/rdma_provider/common.o 00:03:10.750 CC lib/idxd/idxd.o 00:03:10.750 CC lib/json/json_util.o 00:03:10.750 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:10.750 CC lib/rdma_utils/rdma_utils.o 00:03:10.750 CC lib/conf/conf.o 00:03:10.750 CC lib/vmd/vmd.o 00:03:10.750 CC lib/env_dpdk/env.o 00:03:10.750 CC lib/env_dpdk/memory.o 00:03:10.750 LIB libspdk_rdma_provider.a 00:03:10.750 SO libspdk_rdma_provider.so.6.0 00:03:10.750 LIB libspdk_conf.a 00:03:10.750 SO libspdk_conf.so.6.0 00:03:10.750 CC lib/vmd/led.o 00:03:10.750 CC lib/env_dpdk/pci.o 00:03:10.750 SYMLINK libspdk_rdma_provider.so 00:03:10.750 CC lib/idxd/idxd_user.o 00:03:10.750 LIB libspdk_rdma_utils.a 00:03:10.750 SYMLINK libspdk_conf.so 00:03:10.750 CC lib/idxd/idxd_kernel.o 00:03:10.750 LIB libspdk_json.a 00:03:10.750 SO libspdk_rdma_utils.so.1.0 00:03:10.750 SO libspdk_json.so.6.0 00:03:10.750 SYMLINK libspdk_rdma_utils.so 00:03:10.750 CC lib/env_dpdk/init.o 00:03:10.750 SYMLINK libspdk_json.so 00:03:10.750 CC lib/env_dpdk/threads.o 00:03:11.007 CC lib/env_dpdk/pci_ioat.o 00:03:11.007 CC lib/jsonrpc/jsonrpc_server.o 00:03:11.007 CC lib/env_dpdk/pci_virtio.o 00:03:11.007 CC lib/env_dpdk/pci_vmd.o 00:03:11.007 CC lib/env_dpdk/pci_idxd.o 00:03:11.007 LIB libspdk_vmd.a 00:03:11.007 SO libspdk_vmd.so.6.0 00:03:11.007 CC lib/env_dpdk/pci_event.o 00:03:11.007 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:11.007 SYMLINK libspdk_vmd.so 00:03:11.007 CC lib/jsonrpc/jsonrpc_client.o 00:03:11.007 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:11.007 LIB libspdk_idxd.a 00:03:11.007 SO libspdk_idxd.so.12.1 00:03:11.266 CC lib/env_dpdk/sigbus_handler.o 00:03:11.266 CC lib/env_dpdk/pci_dpdk.o 00:03:11.266 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:11.266 SYMLINK libspdk_idxd.so 00:03:11.266 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:11.266 LIB libspdk_jsonrpc.a 00:03:11.266 SO libspdk_jsonrpc.so.6.0 00:03:11.525 SYMLINK libspdk_jsonrpc.so 00:03:11.783 CC lib/rpc/rpc.o 00:03:11.783 LIB libspdk_rpc.a 00:03:11.783 LIB libspdk_env_dpdk.a 00:03:11.783 SO libspdk_rpc.so.6.0 00:03:12.042 SYMLINK libspdk_rpc.so 00:03:12.042 SO libspdk_env_dpdk.so.15.1 00:03:12.042 SYMLINK libspdk_env_dpdk.so 00:03:12.042 CC lib/trace/trace.o 00:03:12.042 CC lib/trace/trace_flags.o 00:03:12.042 CC lib/keyring/keyring_rpc.o 00:03:12.042 CC lib/notify/notify_rpc.o 00:03:12.042 CC lib/keyring/keyring.o 00:03:12.042 CC lib/trace/trace_rpc.o 00:03:12.042 CC lib/notify/notify.o 00:03:12.300 LIB libspdk_notify.a 
00:03:12.300 SO libspdk_notify.so.6.0 00:03:12.300 LIB libspdk_keyring.a 00:03:12.300 SYMLINK libspdk_notify.so 00:03:12.300 LIB libspdk_trace.a 00:03:12.300 SO libspdk_keyring.so.2.0 00:03:12.300 SO libspdk_trace.so.11.0 00:03:12.300 SYMLINK libspdk_keyring.so 00:03:12.560 SYMLINK libspdk_trace.so 00:03:12.560 CC lib/thread/thread.o 00:03:12.560 CC lib/thread/iobuf.o 00:03:12.560 CC lib/sock/sock.o 00:03:12.560 CC lib/sock/sock_rpc.o 00:03:13.130 LIB libspdk_sock.a 00:03:13.130 SO libspdk_sock.so.10.0 00:03:13.130 SYMLINK libspdk_sock.so 00:03:13.389 CC lib/nvme/nvme_ctrlr.o 00:03:13.389 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:13.389 CC lib/nvme/nvme_fabric.o 00:03:13.389 CC lib/nvme/nvme_ns_cmd.o 00:03:13.389 CC lib/nvme/nvme_ns.o 00:03:13.389 CC lib/nvme/nvme_pcie_common.o 00:03:13.389 CC lib/nvme/nvme_pcie.o 00:03:13.389 CC lib/nvme/nvme.o 00:03:13.389 CC lib/nvme/nvme_qpair.o 00:03:13.956 CC lib/nvme/nvme_quirks.o 00:03:13.956 CC lib/nvme/nvme_transport.o 00:03:13.956 CC lib/nvme/nvme_discovery.o 00:03:13.956 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:13.956 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:14.228 LIB libspdk_thread.a 00:03:14.228 CC lib/nvme/nvme_tcp.o 00:03:14.228 SO libspdk_thread.so.11.0 00:03:14.228 CC lib/nvme/nvme_opal.o 00:03:14.228 SYMLINK libspdk_thread.so 00:03:14.228 CC lib/nvme/nvme_io_msg.o 00:03:14.520 CC lib/nvme/nvme_poll_group.o 00:03:14.520 CC lib/nvme/nvme_zns.o 00:03:14.520 CC lib/nvme/nvme_stubs.o 00:03:14.520 CC lib/nvme/nvme_auth.o 00:03:14.520 CC lib/nvme/nvme_cuse.o 00:03:14.520 CC lib/nvme/nvme_rdma.o 00:03:14.778 CC lib/accel/accel.o 00:03:15.035 CC lib/blob/blobstore.o 00:03:15.035 CC lib/blob/request.o 00:03:15.036 CC lib/blob/zeroes.o 00:03:15.036 CC lib/init/json_config.o 00:03:15.293 CC lib/blob/blob_bs_dev.o 00:03:15.294 CC lib/virtio/virtio.o 00:03:15.294 CC lib/virtio/virtio_vhost_user.o 00:03:15.294 CC lib/init/subsystem.o 00:03:15.294 CC lib/fsdev/fsdev.o 00:03:15.294 CC lib/fsdev/fsdev_io.o 00:03:15.552 CC lib/init/subsystem_rpc.o 00:03:15.552 CC lib/init/rpc.o 00:03:15.552 CC lib/virtio/virtio_vfio_user.o 00:03:15.552 CC lib/virtio/virtio_pci.o 00:03:15.552 CC lib/accel/accel_rpc.o 00:03:15.552 CC lib/accel/accel_sw.o 00:03:15.552 LIB libspdk_init.a 00:03:15.810 SO libspdk_init.so.6.0 00:03:15.810 CC lib/fsdev/fsdev_rpc.o 00:03:15.810 SYMLINK libspdk_init.so 00:03:15.810 LIB libspdk_virtio.a 00:03:15.811 SO libspdk_virtio.so.7.0 00:03:15.811 LIB libspdk_nvme.a 00:03:15.811 SYMLINK libspdk_virtio.so 00:03:15.811 CC lib/event/reactor.o 00:03:15.811 CC lib/event/app.o 00:03:15.811 CC lib/event/log_rpc.o 00:03:15.811 CC lib/event/app_rpc.o 00:03:15.811 CC lib/event/scheduler_static.o 00:03:16.071 LIB libspdk_accel.a 00:03:16.071 LIB libspdk_fsdev.a 00:03:16.071 SO libspdk_accel.so.16.0 00:03:16.071 SO libspdk_nvme.so.14.1 00:03:16.071 SO libspdk_fsdev.so.2.0 00:03:16.071 SYMLINK libspdk_accel.so 00:03:16.071 SYMLINK libspdk_fsdev.so 00:03:16.329 SYMLINK libspdk_nvme.so 00:03:16.329 CC lib/bdev/bdev.o 00:03:16.329 CC lib/bdev/bdev_rpc.o 00:03:16.329 CC lib/bdev/bdev_zone.o 00:03:16.329 CC lib/bdev/part.o 00:03:16.329 CC lib/bdev/scsi_nvme.o 00:03:16.329 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:16.329 LIB libspdk_event.a 00:03:16.329 SO libspdk_event.so.14.0 00:03:16.587 SYMLINK libspdk_event.so 00:03:16.845 LIB libspdk_fuse_dispatcher.a 00:03:16.845 SO libspdk_fuse_dispatcher.so.1.0 00:03:17.103 SYMLINK libspdk_fuse_dispatcher.so 00:03:18.477 LIB libspdk_blob.a 00:03:18.477 SO libspdk_blob.so.11.0 00:03:18.477 LIB libspdk_bdev.a 00:03:18.477 
SYMLINK libspdk_blob.so 00:03:18.477 SO libspdk_bdev.so.17.0 00:03:18.477 SYMLINK libspdk_bdev.so 00:03:18.477 CC lib/blobfs/blobfs.o 00:03:18.477 CC lib/blobfs/tree.o 00:03:18.477 CC lib/lvol/lvol.o 00:03:18.477 CC lib/nvmf/ctrlr.o 00:03:18.477 CC lib/nvmf/ctrlr_discovery.o 00:03:18.477 CC lib/nvmf/ctrlr_bdev.o 00:03:18.477 CC lib/scsi/dev.o 00:03:18.477 CC lib/nbd/nbd.o 00:03:18.735 CC lib/ublk/ublk.o 00:03:18.735 CC lib/ftl/ftl_core.o 00:03:18.735 CC lib/ftl/ftl_init.o 00:03:18.735 CC lib/scsi/lun.o 00:03:18.735 CC lib/scsi/port.o 00:03:18.993 CC lib/nbd/nbd_rpc.o 00:03:18.993 CC lib/ftl/ftl_layout.o 00:03:18.993 CC lib/ftl/ftl_debug.o 00:03:18.993 CC lib/scsi/scsi.o 00:03:18.993 LIB libspdk_nbd.a 00:03:18.993 SO libspdk_nbd.so.7.0 00:03:18.993 CC lib/nvmf/subsystem.o 00:03:18.993 CC lib/scsi/scsi_bdev.o 00:03:18.993 SYMLINK libspdk_nbd.so 00:03:18.993 CC lib/ublk/ublk_rpc.o 00:03:18.993 CC lib/scsi/scsi_pr.o 00:03:19.251 CC lib/scsi/scsi_rpc.o 00:03:19.251 LIB libspdk_ublk.a 00:03:19.251 CC lib/ftl/ftl_io.o 00:03:19.251 CC lib/nvmf/nvmf.o 00:03:19.251 SO libspdk_ublk.so.3.0 00:03:19.251 CC lib/scsi/task.o 00:03:19.251 SYMLINK libspdk_ublk.so 00:03:19.251 CC lib/ftl/ftl_sb.o 00:03:19.251 LIB libspdk_blobfs.a 00:03:19.509 SO libspdk_blobfs.so.10.0 00:03:19.509 SYMLINK libspdk_blobfs.so 00:03:19.509 CC lib/ftl/ftl_l2p.o 00:03:19.509 CC lib/ftl/ftl_l2p_flat.o 00:03:19.509 CC lib/nvmf/nvmf_rpc.o 00:03:19.509 LIB libspdk_lvol.a 00:03:19.509 CC lib/ftl/ftl_nv_cache.o 00:03:19.509 SO libspdk_lvol.so.10.0 00:03:19.509 CC lib/ftl/ftl_band.o 00:03:19.509 SYMLINK libspdk_lvol.so 00:03:19.509 CC lib/ftl/ftl_band_ops.o 00:03:19.509 LIB libspdk_scsi.a 00:03:19.509 CC lib/ftl/ftl_writer.o 00:03:19.509 CC lib/ftl/ftl_rq.o 00:03:19.766 SO libspdk_scsi.so.9.0 00:03:19.767 SYMLINK libspdk_scsi.so 00:03:19.767 CC lib/ftl/ftl_reloc.o 00:03:19.767 CC lib/iscsi/conn.o 00:03:19.767 CC lib/iscsi/init_grp.o 00:03:19.767 CC lib/iscsi/iscsi.o 00:03:20.024 CC lib/iscsi/param.o 00:03:20.024 CC lib/nvmf/transport.o 00:03:20.024 CC lib/nvmf/tcp.o 00:03:20.024 CC lib/iscsi/portal_grp.o 00:03:20.024 CC lib/iscsi/tgt_node.o 00:03:20.024 CC lib/nvmf/stubs.o 00:03:20.283 CC lib/nvmf/mdns_server.o 00:03:20.283 CC lib/iscsi/iscsi_subsystem.o 00:03:20.283 CC lib/iscsi/iscsi_rpc.o 00:03:20.541 CC lib/nvmf/rdma.o 00:03:20.541 CC lib/ftl/ftl_l2p_cache.o 00:03:20.541 CC lib/ftl/ftl_p2l.o 00:03:20.541 CC lib/ftl/ftl_p2l_log.o 00:03:20.541 CC lib/ftl/mngt/ftl_mngt.o 00:03:20.541 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:20.541 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:20.541 CC lib/nvmf/auth.o 00:03:20.800 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:20.800 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:20.800 CC lib/vhost/vhost.o 00:03:20.800 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:20.800 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:21.058 CC lib/iscsi/task.o 00:03:21.058 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:21.058 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:21.058 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:21.058 CC lib/vhost/vhost_rpc.o 00:03:21.058 CC lib/vhost/vhost_scsi.o 00:03:21.316 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:21.317 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:21.317 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:21.317 LIB libspdk_iscsi.a 00:03:21.317 SO libspdk_iscsi.so.8.0 00:03:21.317 CC lib/vhost/vhost_blk.o 00:03:21.317 CC lib/ftl/utils/ftl_conf.o 00:03:21.317 CC lib/ftl/utils/ftl_md.o 00:03:21.317 CC lib/vhost/rte_vhost_user.o 00:03:21.575 SYMLINK libspdk_iscsi.so 00:03:21.575 CC lib/ftl/utils/ftl_mempool.o 00:03:21.575 CC lib/ftl/utils/ftl_bitmap.o 
00:03:21.575 CC lib/ftl/utils/ftl_property.o 00:03:21.575 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:21.575 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:21.575 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:21.833 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:21.833 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:21.833 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:21.833 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:21.833 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:21.833 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:21.833 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:21.833 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:21.833 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:21.833 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:21.833 CC lib/ftl/base/ftl_base_dev.o 00:03:21.833 CC lib/ftl/base/ftl_base_bdev.o 00:03:22.092 CC lib/ftl/ftl_trace.o 00:03:22.092 LIB libspdk_ftl.a 00:03:22.092 LIB libspdk_vhost.a 00:03:22.092 LIB libspdk_nvmf.a 00:03:22.351 SO libspdk_vhost.so.8.0 00:03:22.351 SO libspdk_ftl.so.9.0 00:03:22.351 SYMLINK libspdk_vhost.so 00:03:22.351 SO libspdk_nvmf.so.20.0 00:03:22.610 SYMLINK libspdk_nvmf.so 00:03:22.610 SYMLINK libspdk_ftl.so 00:03:22.868 CC module/env_dpdk/env_dpdk_rpc.o 00:03:22.868 CC module/scheduler/gscheduler/gscheduler.o 00:03:22.868 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:22.868 CC module/sock/posix/posix.o 00:03:22.868 CC module/blob/bdev/blob_bdev.o 00:03:22.868 CC module/keyring/linux/keyring.o 00:03:22.868 CC module/keyring/file/keyring.o 00:03:22.868 CC module/fsdev/aio/fsdev_aio.o 00:03:22.868 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:22.868 CC module/accel/error/accel_error.o 00:03:22.868 LIB libspdk_env_dpdk_rpc.a 00:03:22.868 SO libspdk_env_dpdk_rpc.so.6.0 00:03:22.868 LIB libspdk_scheduler_gscheduler.a 00:03:22.868 SYMLINK libspdk_env_dpdk_rpc.so 00:03:22.868 CC module/accel/error/accel_error_rpc.o 00:03:22.868 SO libspdk_scheduler_gscheduler.so.4.0 00:03:22.868 CC module/keyring/linux/keyring_rpc.o 00:03:22.868 LIB libspdk_scheduler_dpdk_governor.a 00:03:22.868 CC module/keyring/file/keyring_rpc.o 00:03:22.868 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:23.127 SYMLINK libspdk_scheduler_gscheduler.so 00:03:23.127 LIB libspdk_scheduler_dynamic.a 00:03:23.127 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:23.127 SO libspdk_scheduler_dynamic.so.4.0 00:03:23.127 LIB libspdk_accel_error.a 00:03:23.127 LIB libspdk_keyring_linux.a 00:03:23.127 SO libspdk_accel_error.so.2.0 00:03:23.127 SO libspdk_keyring_linux.so.1.0 00:03:23.127 LIB libspdk_blob_bdev.a 00:03:23.127 LIB libspdk_keyring_file.a 00:03:23.127 SYMLINK libspdk_scheduler_dynamic.so 00:03:23.127 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:23.127 SO libspdk_blob_bdev.so.11.0 00:03:23.127 SO libspdk_keyring_file.so.2.0 00:03:23.127 SYMLINK libspdk_keyring_linux.so 00:03:23.127 SYMLINK libspdk_accel_error.so 00:03:23.127 CC module/fsdev/aio/linux_aio_mgr.o 00:03:23.127 CC module/accel/ioat/accel_ioat.o 00:03:23.127 CC module/accel/ioat/accel_ioat_rpc.o 00:03:23.127 SYMLINK libspdk_keyring_file.so 00:03:23.127 CC module/accel/dsa/accel_dsa.o 00:03:23.127 SYMLINK libspdk_blob_bdev.so 00:03:23.127 CC module/accel/dsa/accel_dsa_rpc.o 00:03:23.127 CC module/accel/iaa/accel_iaa.o 00:03:23.386 CC module/accel/iaa/accel_iaa_rpc.o 00:03:23.386 LIB libspdk_accel_ioat.a 00:03:23.386 SO libspdk_accel_ioat.so.6.0 00:03:23.386 CC module/bdev/delay/vbdev_delay.o 00:03:23.386 LIB libspdk_accel_dsa.a 00:03:23.386 LIB libspdk_accel_iaa.a 00:03:23.386 SYMLINK libspdk_accel_ioat.so 00:03:23.386 SO libspdk_accel_dsa.so.5.0 
00:03:23.386 CC module/blobfs/bdev/blobfs_bdev.o 00:03:23.386 CC module/bdev/error/vbdev_error.o 00:03:23.386 SO libspdk_accel_iaa.so.3.0 00:03:23.386 LIB libspdk_sock_posix.a 00:03:23.386 CC module/bdev/gpt/gpt.o 00:03:23.386 CC module/bdev/lvol/vbdev_lvol.o 00:03:23.386 SO libspdk_sock_posix.so.6.0 00:03:23.386 SYMLINK libspdk_accel_dsa.so 00:03:23.386 SYMLINK libspdk_accel_iaa.so 00:03:23.386 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:23.646 LIB libspdk_fsdev_aio.a 00:03:23.646 SYMLINK libspdk_sock_posix.so 00:03:23.646 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:23.646 SO libspdk_fsdev_aio.so.1.0 00:03:23.646 CC module/bdev/malloc/bdev_malloc.o 00:03:23.646 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:23.646 CC module/bdev/gpt/vbdev_gpt.o 00:03:23.646 SYMLINK libspdk_fsdev_aio.so 00:03:23.646 CC module/bdev/null/bdev_null.o 00:03:23.646 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:23.646 CC module/bdev/error/vbdev_error_rpc.o 00:03:23.646 CC module/bdev/null/bdev_null_rpc.o 00:03:23.646 LIB libspdk_bdev_delay.a 00:03:23.646 SO libspdk_bdev_delay.so.6.0 00:03:23.646 LIB libspdk_blobfs_bdev.a 00:03:23.646 SO libspdk_blobfs_bdev.so.6.0 00:03:23.904 SYMLINK libspdk_bdev_delay.so 00:03:23.904 LIB libspdk_bdev_error.a 00:03:23.904 SYMLINK libspdk_blobfs_bdev.so 00:03:23.904 SO libspdk_bdev_error.so.6.0 00:03:23.904 LIB libspdk_bdev_null.a 00:03:23.904 SYMLINK libspdk_bdev_error.so 00:03:23.904 SO libspdk_bdev_null.so.6.0 00:03:23.904 LIB libspdk_bdev_gpt.a 00:03:23.904 CC module/bdev/nvme/bdev_nvme.o 00:03:23.904 SO libspdk_bdev_gpt.so.6.0 00:03:23.904 SYMLINK libspdk_bdev_null.so 00:03:23.904 LIB libspdk_bdev_malloc.a 00:03:23.904 CC module/bdev/passthru/vbdev_passthru.o 00:03:23.904 CC module/bdev/raid/bdev_raid.o 00:03:23.904 CC module/bdev/split/vbdev_split.o 00:03:23.904 SO libspdk_bdev_malloc.so.6.0 00:03:23.905 SYMLINK libspdk_bdev_gpt.so 00:03:23.905 LIB libspdk_bdev_lvol.a 00:03:23.905 CC module/bdev/split/vbdev_split_rpc.o 00:03:23.905 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:23.905 CC module/bdev/xnvme/bdev_xnvme.o 00:03:23.905 SO libspdk_bdev_lvol.so.6.0 00:03:24.162 SYMLINK libspdk_bdev_malloc.so 00:03:24.162 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:24.162 CC module/bdev/aio/bdev_aio.o 00:03:24.162 SYMLINK libspdk_bdev_lvol.so 00:03:24.162 CC module/bdev/aio/bdev_aio_rpc.o 00:03:24.162 CC module/bdev/raid/bdev_raid_rpc.o 00:03:24.162 LIB libspdk_bdev_split.a 00:03:24.162 CC module/bdev/raid/bdev_raid_sb.o 00:03:24.162 SO libspdk_bdev_split.so.6.0 00:03:24.162 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:24.162 LIB libspdk_bdev_zone_block.a 00:03:24.162 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:24.162 SYMLINK libspdk_bdev_split.so 00:03:24.162 SO libspdk_bdev_zone_block.so.6.0 00:03:24.421 LIB libspdk_bdev_aio.a 00:03:24.421 CC module/bdev/ftl/bdev_ftl.o 00:03:24.421 SYMLINK libspdk_bdev_zone_block.so 00:03:24.421 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:24.421 CC module/bdev/raid/raid0.o 00:03:24.421 SO libspdk_bdev_aio.so.6.0 00:03:24.421 LIB libspdk_bdev_passthru.a 00:03:24.421 SO libspdk_bdev_passthru.so.6.0 00:03:24.421 CC module/bdev/iscsi/bdev_iscsi.o 00:03:24.421 SYMLINK libspdk_bdev_aio.so 00:03:24.421 LIB libspdk_bdev_xnvme.a 00:03:24.421 CC module/bdev/raid/raid1.o 00:03:24.421 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:24.421 SO libspdk_bdev_xnvme.so.3.0 00:03:24.421 SYMLINK libspdk_bdev_passthru.so 00:03:24.421 CC module/bdev/nvme/nvme_rpc.o 00:03:24.421 SYMLINK libspdk_bdev_xnvme.so 00:03:24.421 CC module/bdev/nvme/bdev_mdns_client.o 
00:03:24.421 CC module/bdev/nvme/vbdev_opal.o 00:03:24.421 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:24.421 LIB libspdk_bdev_ftl.a 00:03:24.679 SO libspdk_bdev_ftl.so.6.0 00:03:24.679 CC module/bdev/raid/concat.o 00:03:24.679 SYMLINK libspdk_bdev_ftl.so 00:03:24.679 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:24.679 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:24.679 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:24.679 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:24.679 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:24.939 LIB libspdk_bdev_iscsi.a 00:03:24.939 SO libspdk_bdev_iscsi.so.6.0 00:03:24.939 SYMLINK libspdk_bdev_iscsi.so 00:03:24.939 LIB libspdk_bdev_raid.a 00:03:24.939 SO libspdk_bdev_raid.so.6.0 00:03:24.939 SYMLINK libspdk_bdev_raid.so 00:03:25.200 LIB libspdk_bdev_virtio.a 00:03:25.461 SO libspdk_bdev_virtio.so.6.0 00:03:25.461 SYMLINK libspdk_bdev_virtio.so 00:03:26.398 LIB libspdk_bdev_nvme.a 00:03:26.398 SO libspdk_bdev_nvme.so.7.0 00:03:26.398 SYMLINK libspdk_bdev_nvme.so 00:03:27.001 CC module/event/subsystems/iobuf/iobuf.o 00:03:27.001 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:27.001 CC module/event/subsystems/fsdev/fsdev.o 00:03:27.001 CC module/event/subsystems/sock/sock.o 00:03:27.001 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:27.001 CC module/event/subsystems/vmd/vmd.o 00:03:27.001 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:27.001 CC module/event/subsystems/keyring/keyring.o 00:03:27.001 CC module/event/subsystems/scheduler/scheduler.o 00:03:27.001 LIB libspdk_event_keyring.a 00:03:27.001 LIB libspdk_event_vhost_blk.a 00:03:27.001 LIB libspdk_event_sock.a 00:03:27.001 LIB libspdk_event_vmd.a 00:03:27.001 LIB libspdk_event_fsdev.a 00:03:27.001 LIB libspdk_event_iobuf.a 00:03:27.001 LIB libspdk_event_scheduler.a 00:03:27.001 SO libspdk_event_vhost_blk.so.3.0 00:03:27.001 SO libspdk_event_keyring.so.1.0 00:03:27.001 SO libspdk_event_sock.so.5.0 00:03:27.001 SO libspdk_event_vmd.so.6.0 00:03:27.001 SO libspdk_event_fsdev.so.1.0 00:03:27.001 SO libspdk_event_scheduler.so.4.0 00:03:27.001 SO libspdk_event_iobuf.so.3.0 00:03:27.001 SYMLINK libspdk_event_keyring.so 00:03:27.001 SYMLINK libspdk_event_vhost_blk.so 00:03:27.001 SYMLINK libspdk_event_sock.so 00:03:27.001 SYMLINK libspdk_event_fsdev.so 00:03:27.001 SYMLINK libspdk_event_scheduler.so 00:03:27.001 SYMLINK libspdk_event_vmd.so 00:03:27.001 SYMLINK libspdk_event_iobuf.so 00:03:27.261 CC module/event/subsystems/accel/accel.o 00:03:27.519 LIB libspdk_event_accel.a 00:03:27.519 SO libspdk_event_accel.so.6.0 00:03:27.519 SYMLINK libspdk_event_accel.so 00:03:27.778 CC module/event/subsystems/bdev/bdev.o 00:03:27.778 LIB libspdk_event_bdev.a 00:03:27.778 SO libspdk_event_bdev.so.6.0 00:03:28.036 SYMLINK libspdk_event_bdev.so 00:03:28.036 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:28.036 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:28.036 CC module/event/subsystems/nbd/nbd.o 00:03:28.036 CC module/event/subsystems/scsi/scsi.o 00:03:28.036 CC module/event/subsystems/ublk/ublk.o 00:03:28.294 LIB libspdk_event_nbd.a 00:03:28.294 LIB libspdk_event_scsi.a 00:03:28.294 SO libspdk_event_nbd.so.6.0 00:03:28.294 SO libspdk_event_scsi.so.6.0 00:03:28.294 LIB libspdk_event_ublk.a 00:03:28.294 SYMLINK libspdk_event_nbd.so 00:03:28.294 SYMLINK libspdk_event_scsi.so 00:03:28.294 SO libspdk_event_ublk.so.3.0 00:03:28.294 LIB libspdk_event_nvmf.a 00:03:28.294 SO libspdk_event_nvmf.so.6.0 00:03:28.294 SYMLINK libspdk_event_ublk.so 00:03:28.294 SYMLINK libspdk_event_nvmf.so 00:03:28.552 CC 
module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:28.552 CC module/event/subsystems/iscsi/iscsi.o 00:03:28.552 LIB libspdk_event_vhost_scsi.a 00:03:28.552 LIB libspdk_event_iscsi.a 00:03:28.552 SO libspdk_event_vhost_scsi.so.3.0 00:03:28.552 SO libspdk_event_iscsi.so.6.0 00:03:28.552 SYMLINK libspdk_event_vhost_scsi.so 00:03:28.552 SYMLINK libspdk_event_iscsi.so 00:03:28.811 SO libspdk.so.6.0 00:03:28.811 SYMLINK libspdk.so 00:03:28.811 CC app/spdk_nvme_identify/identify.o 00:03:28.811 CC app/trace_record/trace_record.o 00:03:28.811 CXX app/trace/trace.o 00:03:28.811 CC app/spdk_nvme_perf/perf.o 00:03:29.068 CC app/spdk_lspci/spdk_lspci.o 00:03:29.068 CC app/nvmf_tgt/nvmf_main.o 00:03:29.068 CC app/iscsi_tgt/iscsi_tgt.o 00:03:29.068 CC test/thread/poller_perf/poller_perf.o 00:03:29.068 CC examples/util/zipf/zipf.o 00:03:29.068 CC app/spdk_tgt/spdk_tgt.o 00:03:29.068 LINK spdk_lspci 00:03:29.068 LINK nvmf_tgt 00:03:29.068 LINK poller_perf 00:03:29.068 LINK zipf 00:03:29.068 LINK iscsi_tgt 00:03:29.068 LINK spdk_trace_record 00:03:29.326 LINK spdk_tgt 00:03:29.326 CC app/spdk_nvme_discover/discovery_aer.o 00:03:29.326 LINK spdk_trace 00:03:29.326 CC app/spdk_top/spdk_top.o 00:03:29.327 CC app/spdk_dd/spdk_dd.o 00:03:29.327 CC examples/ioat/perf/perf.o 00:03:29.327 CC test/dma/test_dma/test_dma.o 00:03:29.327 LINK spdk_nvme_discover 00:03:29.585 CC app/fio/nvme/fio_plugin.o 00:03:29.585 CC test/app/bdev_svc/bdev_svc.o 00:03:29.585 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:29.585 LINK ioat_perf 00:03:29.585 CC app/fio/bdev/fio_plugin.o 00:03:29.585 LINK bdev_svc 00:03:29.844 LINK spdk_nvme_identify 00:03:29.844 LINK spdk_dd 00:03:29.844 LINK spdk_nvme_perf 00:03:29.844 CC examples/ioat/verify/verify.o 00:03:29.844 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:29.844 LINK test_dma 00:03:29.844 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:29.844 CC test/app/histogram_perf/histogram_perf.o 00:03:30.102 LINK nvme_fuzz 00:03:30.102 LINK verify 00:03:30.102 LINK spdk_nvme 00:03:30.102 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:30.102 CC examples/vmd/lsvmd/lsvmd.o 00:03:30.102 LINK spdk_top 00:03:30.102 LINK histogram_perf 00:03:30.102 LINK spdk_bdev 00:03:30.102 CC test/app/jsoncat/jsoncat.o 00:03:30.102 LINK lsvmd 00:03:30.102 CC test/app/stub/stub.o 00:03:30.102 CC examples/vmd/led/led.o 00:03:30.102 CC examples/idxd/perf/perf.o 00:03:30.360 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:30.360 LINK jsoncat 00:03:30.360 CC app/vhost/vhost.o 00:03:30.360 LINK led 00:03:30.360 LINK stub 00:03:30.360 CC examples/thread/thread/thread_ex.o 00:03:30.360 CC examples/sock/hello_world/hello_sock.o 00:03:30.360 LINK vhost_fuzz 00:03:30.360 LINK interrupt_tgt 00:03:30.618 LINK vhost 00:03:30.618 TEST_HEADER include/spdk/accel.h 00:03:30.618 TEST_HEADER include/spdk/accel_module.h 00:03:30.618 TEST_HEADER include/spdk/assert.h 00:03:30.618 TEST_HEADER include/spdk/barrier.h 00:03:30.618 TEST_HEADER include/spdk/base64.h 00:03:30.618 TEST_HEADER include/spdk/bdev.h 00:03:30.618 TEST_HEADER include/spdk/bdev_module.h 00:03:30.618 TEST_HEADER include/spdk/bdev_zone.h 00:03:30.618 TEST_HEADER include/spdk/bit_array.h 00:03:30.618 TEST_HEADER include/spdk/bit_pool.h 00:03:30.618 TEST_HEADER include/spdk/blob_bdev.h 00:03:30.618 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:30.618 TEST_HEADER include/spdk/blobfs.h 00:03:30.618 TEST_HEADER include/spdk/blob.h 00:03:30.618 TEST_HEADER include/spdk/conf.h 00:03:30.618 TEST_HEADER include/spdk/config.h 00:03:30.618 TEST_HEADER include/spdk/cpuset.h 
00:03:30.618 TEST_HEADER include/spdk/crc16.h 00:03:30.618 TEST_HEADER include/spdk/crc32.h 00:03:30.618 TEST_HEADER include/spdk/crc64.h 00:03:30.618 TEST_HEADER include/spdk/dif.h 00:03:30.618 TEST_HEADER include/spdk/dma.h 00:03:30.618 TEST_HEADER include/spdk/endian.h 00:03:30.618 TEST_HEADER include/spdk/env_dpdk.h 00:03:30.618 TEST_HEADER include/spdk/env.h 00:03:30.618 LINK idxd_perf 00:03:30.618 TEST_HEADER include/spdk/event.h 00:03:30.618 TEST_HEADER include/spdk/fd_group.h 00:03:30.618 TEST_HEADER include/spdk/fd.h 00:03:30.618 TEST_HEADER include/spdk/file.h 00:03:30.618 TEST_HEADER include/spdk/fsdev.h 00:03:30.618 TEST_HEADER include/spdk/fsdev_module.h 00:03:30.618 TEST_HEADER include/spdk/ftl.h 00:03:30.618 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:30.618 TEST_HEADER include/spdk/gpt_spec.h 00:03:30.618 TEST_HEADER include/spdk/hexlify.h 00:03:30.618 TEST_HEADER include/spdk/histogram_data.h 00:03:30.618 TEST_HEADER include/spdk/idxd.h 00:03:30.618 TEST_HEADER include/spdk/idxd_spec.h 00:03:30.618 TEST_HEADER include/spdk/init.h 00:03:30.618 TEST_HEADER include/spdk/ioat.h 00:03:30.618 TEST_HEADER include/spdk/ioat_spec.h 00:03:30.618 TEST_HEADER include/spdk/iscsi_spec.h 00:03:30.618 TEST_HEADER include/spdk/json.h 00:03:30.618 TEST_HEADER include/spdk/jsonrpc.h 00:03:30.618 TEST_HEADER include/spdk/keyring.h 00:03:30.618 TEST_HEADER include/spdk/keyring_module.h 00:03:30.618 TEST_HEADER include/spdk/likely.h 00:03:30.618 TEST_HEADER include/spdk/log.h 00:03:30.618 TEST_HEADER include/spdk/lvol.h 00:03:30.618 TEST_HEADER include/spdk/md5.h 00:03:30.618 TEST_HEADER include/spdk/memory.h 00:03:30.618 TEST_HEADER include/spdk/mmio.h 00:03:30.618 TEST_HEADER include/spdk/nbd.h 00:03:30.618 TEST_HEADER include/spdk/net.h 00:03:30.618 TEST_HEADER include/spdk/notify.h 00:03:30.618 TEST_HEADER include/spdk/nvme.h 00:03:30.618 TEST_HEADER include/spdk/nvme_intel.h 00:03:30.618 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:30.618 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:30.618 TEST_HEADER include/spdk/nvme_spec.h 00:03:30.618 TEST_HEADER include/spdk/nvme_zns.h 00:03:30.618 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:30.618 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:30.618 TEST_HEADER include/spdk/nvmf.h 00:03:30.618 TEST_HEADER include/spdk/nvmf_spec.h 00:03:30.618 TEST_HEADER include/spdk/nvmf_transport.h 00:03:30.618 TEST_HEADER include/spdk/opal.h 00:03:30.618 TEST_HEADER include/spdk/opal_spec.h 00:03:30.618 TEST_HEADER include/spdk/pci_ids.h 00:03:30.618 CC test/event/event_perf/event_perf.o 00:03:30.618 LINK thread 00:03:30.618 TEST_HEADER include/spdk/pipe.h 00:03:30.618 TEST_HEADER include/spdk/queue.h 00:03:30.618 TEST_HEADER include/spdk/reduce.h 00:03:30.618 TEST_HEADER include/spdk/rpc.h 00:03:30.618 TEST_HEADER include/spdk/scheduler.h 00:03:30.618 TEST_HEADER include/spdk/scsi.h 00:03:30.618 TEST_HEADER include/spdk/scsi_spec.h 00:03:30.618 TEST_HEADER include/spdk/sock.h 00:03:30.618 TEST_HEADER include/spdk/stdinc.h 00:03:30.618 TEST_HEADER include/spdk/string.h 00:03:30.618 TEST_HEADER include/spdk/thread.h 00:03:30.619 TEST_HEADER include/spdk/trace.h 00:03:30.619 TEST_HEADER include/spdk/trace_parser.h 00:03:30.619 TEST_HEADER include/spdk/tree.h 00:03:30.619 TEST_HEADER include/spdk/ublk.h 00:03:30.619 TEST_HEADER include/spdk/util.h 00:03:30.619 TEST_HEADER include/spdk/uuid.h 00:03:30.619 TEST_HEADER include/spdk/version.h 00:03:30.619 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:30.619 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:30.619 
TEST_HEADER include/spdk/vhost.h 00:03:30.619 TEST_HEADER include/spdk/vmd.h 00:03:30.619 TEST_HEADER include/spdk/xor.h 00:03:30.619 TEST_HEADER include/spdk/zipf.h 00:03:30.619 CXX test/cpp_headers/accel.o 00:03:30.619 CC test/env/mem_callbacks/mem_callbacks.o 00:03:30.619 CC test/nvme/aer/aer.o 00:03:30.619 CC test/event/reactor/reactor.o 00:03:30.619 LINK hello_sock 00:03:30.619 CC test/nvme/reset/reset.o 00:03:30.877 LINK event_perf 00:03:30.877 CC test/rpc_client/rpc_client_test.o 00:03:30.877 CXX test/cpp_headers/accel_module.o 00:03:30.877 LINK reactor 00:03:30.877 CC test/nvme/sgl/sgl.o 00:03:30.877 CC test/event/reactor_perf/reactor_perf.o 00:03:30.877 CXX test/cpp_headers/assert.o 00:03:30.877 CC examples/accel/perf/accel_perf.o 00:03:30.877 LINK aer 00:03:30.877 LINK rpc_client_test 00:03:30.877 LINK reset 00:03:31.135 CC test/nvme/e2edp/nvme_dp.o 00:03:31.135 CXX test/cpp_headers/barrier.o 00:03:31.135 LINK reactor_perf 00:03:31.135 LINK sgl 00:03:31.135 LINK mem_callbacks 00:03:31.135 CC test/env/vtophys/vtophys.o 00:03:31.135 CXX test/cpp_headers/base64.o 00:03:31.398 LINK nvme_dp 00:03:31.398 CC test/accel/dif/dif.o 00:03:31.398 LINK vtophys 00:03:31.398 CC test/event/app_repeat/app_repeat.o 00:03:31.398 CC test/blobfs/mkfs/mkfs.o 00:03:31.398 CC test/nvme/overhead/overhead.o 00:03:31.398 CC test/nvme/err_injection/err_injection.o 00:03:31.398 CXX test/cpp_headers/bdev.o 00:03:31.398 LINK accel_perf 00:03:31.398 LINK app_repeat 00:03:31.398 CC test/nvme/startup/startup.o 00:03:31.398 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:31.398 LINK mkfs 00:03:31.676 LINK err_injection 00:03:31.676 CXX test/cpp_headers/bdev_module.o 00:03:31.676 LINK overhead 00:03:31.676 LINK iscsi_fuzz 00:03:31.676 LINK startup 00:03:31.676 LINK env_dpdk_post_init 00:03:31.676 CC test/event/scheduler/scheduler.o 00:03:31.676 CXX test/cpp_headers/bdev_zone.o 00:03:31.676 CC test/nvme/reserve/reserve.o 00:03:31.676 CC examples/blob/hello_world/hello_blob.o 00:03:31.934 CC examples/nvme/hello_world/hello_world.o 00:03:31.934 CC test/nvme/simple_copy/simple_copy.o 00:03:31.934 CC test/env/memory/memory_ut.o 00:03:31.934 CC test/env/pci/pci_ut.o 00:03:31.934 CXX test/cpp_headers/bit_array.o 00:03:31.934 LINK scheduler 00:03:31.934 CC test/lvol/esnap/esnap.o 00:03:31.934 LINK reserve 00:03:31.934 LINK hello_blob 00:03:31.934 CXX test/cpp_headers/bit_pool.o 00:03:31.934 LINK dif 00:03:31.934 LINK hello_world 00:03:31.934 LINK simple_copy 00:03:32.192 CXX test/cpp_headers/blob_bdev.o 00:03:32.192 CC test/nvme/connect_stress/connect_stress.o 00:03:32.192 CC test/nvme/boot_partition/boot_partition.o 00:03:32.192 CC test/nvme/compliance/nvme_compliance.o 00:03:32.192 CC test/nvme/fused_ordering/fused_ordering.o 00:03:32.192 LINK pci_ut 00:03:32.192 CC examples/nvme/reconnect/reconnect.o 00:03:32.192 CC examples/blob/cli/blobcli.o 00:03:32.192 CXX test/cpp_headers/blobfs_bdev.o 00:03:32.192 LINK connect_stress 00:03:32.192 LINK boot_partition 00:03:32.450 LINK fused_ordering 00:03:32.450 CXX test/cpp_headers/blobfs.o 00:03:32.450 CXX test/cpp_headers/blob.o 00:03:32.450 CXX test/cpp_headers/conf.o 00:03:32.450 CXX test/cpp_headers/config.o 00:03:32.450 LINK nvme_compliance 00:03:32.708 LINK reconnect 00:03:32.708 CC test/bdev/bdevio/bdevio.o 00:03:32.708 CXX test/cpp_headers/cpuset.o 00:03:32.708 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:32.708 CC examples/nvme/arbitration/arbitration.o 00:03:32.708 CC examples/nvme/hotplug/hotplug.o 00:03:32.708 CXX test/cpp_headers/crc16.o 00:03:32.708 
LINK blobcli 00:03:32.708 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:32.708 CC test/nvme/fdp/fdp.o 00:03:32.708 LINK memory_ut 00:03:32.965 LINK hotplug 00:03:32.965 CXX test/cpp_headers/crc32.o 00:03:32.965 CXX test/cpp_headers/crc64.o 00:03:32.965 LINK doorbell_aers 00:03:32.965 LINK arbitration 00:03:32.965 LINK bdevio 00:03:32.965 CXX test/cpp_headers/dif.o 00:03:32.965 LINK fdp 00:03:32.965 CXX test/cpp_headers/dma.o 00:03:32.965 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:33.223 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:33.223 LINK nvme_manage 00:03:33.223 CC examples/bdev/hello_world/hello_bdev.o 00:03:33.224 CXX test/cpp_headers/endian.o 00:03:33.224 CC examples/bdev/bdevperf/bdevperf.o 00:03:33.224 CXX test/cpp_headers/env_dpdk.o 00:03:33.224 CC test/nvme/cuse/cuse.o 00:03:33.224 LINK cmb_copy 00:03:33.224 CXX test/cpp_headers/env.o 00:03:33.224 CC examples/nvme/abort/abort.o 00:03:33.224 CXX test/cpp_headers/event.o 00:03:33.482 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:33.482 LINK hello_bdev 00:03:33.482 LINK hello_fsdev 00:03:33.482 CXX test/cpp_headers/fd_group.o 00:03:33.482 CXX test/cpp_headers/fd.o 00:03:33.482 CXX test/cpp_headers/file.o 00:03:33.482 CXX test/cpp_headers/fsdev.o 00:03:33.482 LINK pmr_persistence 00:03:33.482 CXX test/cpp_headers/fsdev_module.o 00:03:33.482 CXX test/cpp_headers/ftl.o 00:03:33.482 CXX test/cpp_headers/fuse_dispatcher.o 00:03:33.482 CXX test/cpp_headers/gpt_spec.o 00:03:33.740 CXX test/cpp_headers/hexlify.o 00:03:33.740 CXX test/cpp_headers/histogram_data.o 00:03:33.740 CXX test/cpp_headers/idxd.o 00:03:33.740 CXX test/cpp_headers/idxd_spec.o 00:03:33.740 LINK abort 00:03:33.740 CXX test/cpp_headers/init.o 00:03:33.740 CXX test/cpp_headers/ioat.o 00:03:33.740 CXX test/cpp_headers/ioat_spec.o 00:03:33.740 CXX test/cpp_headers/iscsi_spec.o 00:03:33.740 CXX test/cpp_headers/json.o 00:03:33.740 CXX test/cpp_headers/jsonrpc.o 00:03:33.740 CXX test/cpp_headers/keyring.o 00:03:33.740 CXX test/cpp_headers/keyring_module.o 00:03:33.998 CXX test/cpp_headers/likely.o 00:03:33.998 CXX test/cpp_headers/log.o 00:03:33.998 CXX test/cpp_headers/lvol.o 00:03:33.998 CXX test/cpp_headers/md5.o 00:03:33.998 CXX test/cpp_headers/memory.o 00:03:33.998 CXX test/cpp_headers/mmio.o 00:03:33.999 CXX test/cpp_headers/nbd.o 00:03:33.999 LINK bdevperf 00:03:33.999 CXX test/cpp_headers/net.o 00:03:33.999 CXX test/cpp_headers/notify.o 00:03:33.999 CXX test/cpp_headers/nvme.o 00:03:33.999 CXX test/cpp_headers/nvme_intel.o 00:03:33.999 CXX test/cpp_headers/nvme_ocssd.o 00:03:33.999 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:33.999 CXX test/cpp_headers/nvme_spec.o 00:03:34.257 CXX test/cpp_headers/nvme_zns.o 00:03:34.257 CXX test/cpp_headers/nvmf_cmd.o 00:03:34.257 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:34.257 CXX test/cpp_headers/nvmf.o 00:03:34.257 CXX test/cpp_headers/nvmf_spec.o 00:03:34.257 CXX test/cpp_headers/nvmf_transport.o 00:03:34.257 CXX test/cpp_headers/opal.o 00:03:34.257 CXX test/cpp_headers/opal_spec.o 00:03:34.257 CXX test/cpp_headers/pci_ids.o 00:03:34.257 CC examples/nvmf/nvmf/nvmf.o 00:03:34.257 CXX test/cpp_headers/pipe.o 00:03:34.257 CXX test/cpp_headers/queue.o 00:03:34.257 CXX test/cpp_headers/reduce.o 00:03:34.257 CXX test/cpp_headers/rpc.o 00:03:34.515 CXX test/cpp_headers/scheduler.o 00:03:34.515 CXX test/cpp_headers/scsi.o 00:03:34.515 CXX test/cpp_headers/scsi_spec.o 00:03:34.515 CXX test/cpp_headers/sock.o 00:03:34.515 CXX test/cpp_headers/stdinc.o 00:03:34.515 LINK cuse 00:03:34.515 CXX test/cpp_headers/string.o 
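Note: the CXX test/cpp_headers/*.o lines running above and below come from SPDK's header self-containedness check: every public header under include/spdk is compiled as its own C++ translation unit, so a header that fails to include what it uses breaks the build on its own. A minimal sketch of the idea behind those object names (the generation step here is an assumption reconstructed from the log, not the verbatim test/cpp_headers build rules):

    # For each public header, emit a one-line .cpp that includes only that
    # header; compiling it verifies the header is self-contained and C++-safe.
    for hdr in include/spdk/*.h; do
        name=$(basename "$hdr" .h)
        printf '#include <spdk/%s.h>\n' "$name" > "test/cpp_headers/$name.cpp"
    done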
00:03:34.515 CXX test/cpp_headers/thread.o 00:03:34.515 CXX test/cpp_headers/trace.o 00:03:34.515 LINK nvmf 00:03:34.515 CXX test/cpp_headers/trace_parser.o 00:03:34.515 CXX test/cpp_headers/tree.o 00:03:34.515 CXX test/cpp_headers/ublk.o 00:03:34.515 CXX test/cpp_headers/util.o 00:03:34.515 CXX test/cpp_headers/uuid.o 00:03:34.515 CXX test/cpp_headers/version.o 00:03:34.515 CXX test/cpp_headers/vfio_user_pci.o 00:03:34.515 CXX test/cpp_headers/vfio_user_spec.o 00:03:34.515 CXX test/cpp_headers/vhost.o 00:03:34.515 CXX test/cpp_headers/vmd.o 00:03:34.515 CXX test/cpp_headers/xor.o 00:03:34.775 CXX test/cpp_headers/zipf.o 00:03:37.318 LINK esnap 00:03:37.318 00:03:37.318 real 1m4.610s 00:03:37.318 user 6m2.616s 00:03:37.318 sys 1m3.042s 00:03:37.318 17:44:55 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:37.318 ************************************ 00:03:37.319 END TEST make 00:03:37.319 17:44:55 make -- common/autotest_common.sh@10 -- $ set +x 00:03:37.319 ************************************ 00:03:37.319 17:44:55 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:37.319 17:44:55 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:37.319 17:44:55 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:37.319 17:44:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.319 17:44:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:37.319 17:44:55 -- pm/common@44 -- $ pid=5063 00:03:37.319 17:44:55 -- pm/common@50 -- $ kill -TERM 5063 00:03:37.319 17:44:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.319 17:44:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:37.319 17:44:55 -- pm/common@44 -- $ pid=5064 00:03:37.319 17:44:55 -- pm/common@50 -- $ kill -TERM 5064 00:03:37.319 17:44:55 -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:03:37.319 17:44:55 -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:03:37.319 17:44:55 -- common/autotest_common.sh@1689 -- # lcov --version 00:03:37.319 17:44:55 -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:03:37.319 17:44:55 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:37.319 17:44:55 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:37.319 17:44:55 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:37.319 17:44:55 -- scripts/common.sh@336 -- # IFS=.-: 00:03:37.319 17:44:55 -- scripts/common.sh@336 -- # read -ra ver1 00:03:37.319 17:44:55 -- scripts/common.sh@337 -- # IFS=.-: 00:03:37.319 17:44:55 -- scripts/common.sh@337 -- # read -ra ver2 00:03:37.319 17:44:55 -- scripts/common.sh@338 -- # local 'op=<' 00:03:37.319 17:44:55 -- scripts/common.sh@340 -- # ver1_l=2 00:03:37.319 17:44:55 -- scripts/common.sh@341 -- # ver2_l=1 00:03:37.319 17:44:55 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:37.319 17:44:55 -- scripts/common.sh@344 -- # case "$op" in 00:03:37.319 17:44:55 -- scripts/common.sh@345 -- # : 1 00:03:37.319 17:44:55 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:37.319 17:44:55 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:37.319 17:44:55 -- scripts/common.sh@365 -- # decimal 1 00:03:37.319 17:44:55 -- scripts/common.sh@353 -- # local d=1 00:03:37.319 17:44:55 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:37.319 17:44:55 -- scripts/common.sh@355 -- # echo 1 00:03:37.319 17:44:55 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:37.319 17:44:55 -- scripts/common.sh@366 -- # decimal 2 00:03:37.319 17:44:55 -- scripts/common.sh@353 -- # local d=2 00:03:37.319 17:44:55 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:37.319 17:44:55 -- scripts/common.sh@355 -- # echo 2 00:03:37.319 17:44:55 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:37.319 17:44:55 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:37.319 17:44:55 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:37.319 17:44:55 -- scripts/common.sh@368 -- # return 0 00:03:37.319 17:44:55 -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:37.319 17:44:55 -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:03:37.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.319 --rc genhtml_branch_coverage=1 00:03:37.319 --rc genhtml_function_coverage=1 00:03:37.319 --rc genhtml_legend=1 00:03:37.319 --rc geninfo_all_blocks=1 00:03:37.319 --rc geninfo_unexecuted_blocks=1 00:03:37.319 00:03:37.319 ' 00:03:37.319 17:44:55 -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:03:37.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.319 --rc genhtml_branch_coverage=1 00:03:37.319 --rc genhtml_function_coverage=1 00:03:37.319 --rc genhtml_legend=1 00:03:37.319 --rc geninfo_all_blocks=1 00:03:37.319 --rc geninfo_unexecuted_blocks=1 00:03:37.319 00:03:37.319 ' 00:03:37.319 17:44:55 -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:03:37.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.319 --rc genhtml_branch_coverage=1 00:03:37.319 --rc genhtml_function_coverage=1 00:03:37.319 --rc genhtml_legend=1 00:03:37.319 --rc geninfo_all_blocks=1 00:03:37.319 --rc geninfo_unexecuted_blocks=1 00:03:37.319 00:03:37.319 ' 00:03:37.319 17:44:55 -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:03:37.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.319 --rc genhtml_branch_coverage=1 00:03:37.319 --rc genhtml_function_coverage=1 00:03:37.319 --rc genhtml_legend=1 00:03:37.319 --rc geninfo_all_blocks=1 00:03:37.319 --rc geninfo_unexecuted_blocks=1 00:03:37.319 00:03:37.319 ' 00:03:37.319 17:44:55 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:37.319 17:44:55 -- nvmf/common.sh@7 -- # uname -s 00:03:37.319 17:44:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:37.319 17:44:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:37.319 17:44:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:37.319 17:44:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:37.319 17:44:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:37.319 17:44:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:37.319 17:44:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:37.319 17:44:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:37.319 17:44:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:37.319 17:44:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:37.319 17:44:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5947e2df-d125-4472-98fc-d86088b051d0 00:03:37.319 
17:44:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=5947e2df-d125-4472-98fc-d86088b051d0 00:03:37.319 17:44:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:37.319 17:44:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:37.319 17:44:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:37.319 17:44:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:37.319 17:44:55 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:37.319 17:44:55 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:37.319 17:44:55 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:37.319 17:44:55 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:37.319 17:44:55 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:37.319 17:44:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.319 17:44:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.319 17:44:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.319 17:44:55 -- paths/export.sh@5 -- # export PATH 00:03:37.319 17:44:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.319 17:44:55 -- nvmf/common.sh@51 -- # : 0 00:03:37.319 17:44:55 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:37.319 17:44:55 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:37.319 17:44:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:37.319 17:44:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:37.319 17:44:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:37.319 17:44:55 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:37.319 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:37.319 17:44:55 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:37.319 17:44:55 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:37.319 17:44:55 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:37.319 17:44:55 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:37.319 17:44:55 -- spdk/autotest.sh@32 -- # uname -s 00:03:37.319 17:44:55 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:37.319 17:44:55 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:37.319 17:44:55 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:37.319 17:44:55 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:37.319 17:44:55 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:37.319 17:44:55 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:37.319 17:44:55 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:37.319 17:44:55 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:37.319 17:44:55 -- spdk/autotest.sh@48 -- # udevadm_pid=54200 00:03:37.319 17:44:55 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:37.319 17:44:55 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:37.319 17:44:55 -- pm/common@17 -- # local monitor 00:03:37.319 17:44:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.319 17:44:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.319 17:44:55 -- pm/common@25 -- # sleep 1 00:03:37.319 17:44:55 -- pm/common@21 -- # date +%s 00:03:37.320 17:44:55 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1729878295 00:03:37.320 17:44:55 -- pm/common@21 -- # date +%s 00:03:37.320 17:44:55 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1729878295 00:03:37.320 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1729878295_collect-cpu-load.pm.log 00:03:37.320 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1729878295_collect-vmstat.pm.log 00:03:38.261 17:44:56 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:38.261 17:44:56 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:38.261 17:44:56 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:38.261 17:44:56 -- common/autotest_common.sh@10 -- # set +x 00:03:38.261 17:44:56 -- spdk/autotest.sh@59 -- # create_test_list 00:03:38.261 17:44:56 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:38.261 17:44:56 -- common/autotest_common.sh@10 -- # set +x 00:03:38.261 17:44:56 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:38.261 17:44:56 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:38.261 17:44:56 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:38.261 17:44:56 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:38.261 17:44:56 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:38.261 17:44:56 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:38.577 17:44:56 -- common/autotest_common.sh@1453 -- # uname 00:03:38.577 17:44:56 -- common/autotest_common.sh@1453 -- # '[' Linux = FreeBSD ']' 00:03:38.577 17:44:56 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:38.577 17:44:56 -- common/autotest_common.sh@1473 -- # uname 00:03:38.577 17:44:56 -- common/autotest_common.sh@1473 -- # [[ Linux = FreeBSD ]] 00:03:38.577 17:44:56 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:38.577 17:44:56 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:38.577 lcov: LCOV version 1.15 00:03:38.577 17:44:56 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:53.473 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:53.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:08.386 17:45:25 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:08.386 17:45:25 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:08.386 17:45:25 -- common/autotest_common.sh@10 -- # set +x 00:04:08.386 17:45:25 -- spdk/autotest.sh@78 -- # rm -f 00:04:08.386 17:45:25 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:08.386 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:08.386 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:08.386 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:08.386 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:08.644 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:08.644 17:45:26 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:08.644 17:45:26 -- common/autotest_common.sh@1653 -- # zoned_devs=() 00:04:08.644 17:45:26 -- common/autotest_common.sh@1653 -- # local -gA zoned_devs 00:04:08.644 17:45:26 -- common/autotest_common.sh@1654 -- # local nvme bdf 00:04:08.644 17:45:26 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:04:08.644 17:45:26 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme0n1 00:04:08.644 17:45:26 -- common/autotest_common.sh@1646 -- # local device=nvme0n1 00:04:08.644 17:45:26 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:08.644 17:45:26 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:04:08.644 17:45:26 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:04:08.644 17:45:26 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme1n1 00:04:08.644 17:45:26 -- common/autotest_common.sh@1646 -- # local device=nvme1n1 00:04:08.644 17:45:26 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:08.644 17:45:26 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:04:08.644 17:45:26 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:04:08.644 17:45:26 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2c2n1 00:04:08.644 17:45:26 -- common/autotest_common.sh@1646 -- # local device=nvme2c2n1 00:04:08.644 17:45:26 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:04:08.644 17:45:26 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:04:08.644 17:45:26 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:04:08.644 17:45:26 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2n1 00:04:08.644 17:45:26 -- common/autotest_common.sh@1646 -- # local device=nvme2n1 00:04:08.644 17:45:26 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:08.644 17:45:26 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:04:08.644 17:45:26 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:04:08.644 17:45:26 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme3n1 00:04:08.644 17:45:26 -- common/autotest_common.sh@1646 -- # local device=nvme3n1 00:04:08.644 17:45:26 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:08.644 
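Note: the get_zoned_devs trace running through this stretch is autotest's pre-cleanup filter: it walks /sys/block/nvme* and records any namespace whose queue/zoned attribute reports something other than "none", so zoned devices are excluded from the raw GPT probes and dd wipes that follow. A sketch reconstructed from the traced conditionals, with the log's is_block_zoned helper inlined (function and variable names are taken from the trace; the exact body is an assumption):

    # Collect zoned NVMe namespaces; "none" in queue/zoned means a
    # conventional block device, anything else marks the device as zoned.
    get_zoned_devs() {
        local -gA zoned_devs=()
        local nvme
        for nvme in /sys/block/nvme*; do
            if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
                zoned_devs[${nvme##*/}]=1
            fi
        done
    }

On this VM every namespace reports "none", so the `(( 0 > 0 ))` guard that follows skips the zoned-device branch entirely.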
17:45:26 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:04:08.644 17:45:26 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:04:08.644 17:45:26 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme3n2 00:04:08.644 17:45:26 -- common/autotest_common.sh@1646 -- # local device=nvme3n2 00:04:08.644 17:45:26 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme3n2/queue/zoned ]] 00:04:08.644 17:45:26 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:04:08.644 17:45:26 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:04:08.644 17:45:26 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme3n3 00:04:08.644 17:45:26 -- common/autotest_common.sh@1646 -- # local device=nvme3n3 00:04:08.644 17:45:26 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme3n3/queue/zoned ]] 00:04:08.644 17:45:26 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:04:08.644 17:45:26 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:08.644 17:45:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.644 17:45:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:08.644 17:45:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:08.644 17:45:26 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:08.644 17:45:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:08.644 No valid GPT data, bailing 00:04:08.644 17:45:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:08.644 17:45:26 -- scripts/common.sh@394 -- # pt= 00:04:08.644 17:45:26 -- scripts/common.sh@395 -- # return 1 00:04:08.644 17:45:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:08.644 1+0 records in 00:04:08.644 1+0 records out 00:04:08.644 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00913271 s, 115 MB/s 00:04:08.644 17:45:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.644 17:45:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:08.644 17:45:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:08.644 17:45:26 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:08.644 17:45:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:08.644 No valid GPT data, bailing 00:04:08.644 17:45:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:08.644 17:45:26 -- scripts/common.sh@394 -- # pt= 00:04:08.644 17:45:26 -- scripts/common.sh@395 -- # return 1 00:04:08.644 17:45:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:08.644 1+0 records in 00:04:08.644 1+0 records out 00:04:08.644 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00381576 s, 275 MB/s 00:04:08.644 17:45:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.644 17:45:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:08.644 17:45:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:08.644 17:45:26 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:08.644 17:45:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:08.644 No valid GPT data, bailing 00:04:08.644 17:45:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:08.644 17:45:27 -- scripts/common.sh@394 -- # pt= 00:04:08.644 17:45:27 -- scripts/common.sh@395 -- # return 1 00:04:08.644 17:45:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:08.644 1+0 
records in 00:04:08.644 1+0 records out 00:04:08.644 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00364004 s, 288 MB/s 00:04:08.644 17:45:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.644 17:45:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:08.644 17:45:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:08.644 17:45:27 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:08.644 17:45:27 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:08.644 No valid GPT data, bailing 00:04:08.902 17:45:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:08.902 17:45:27 -- scripts/common.sh@394 -- # pt= 00:04:08.902 17:45:27 -- scripts/common.sh@395 -- # return 1 00:04:08.902 17:45:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:08.902 1+0 records in 00:04:08.902 1+0 records out 00:04:08.902 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00394084 s, 266 MB/s 00:04:08.902 17:45:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.902 17:45:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:08.902 17:45:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n2 00:04:08.902 17:45:27 -- scripts/common.sh@381 -- # local block=/dev/nvme3n2 pt 00:04:08.902 17:45:27 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n2 00:04:08.902 No valid GPT data, bailing 00:04:08.902 17:45:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n2 00:04:08.902 17:45:27 -- scripts/common.sh@394 -- # pt= 00:04:08.902 17:45:27 -- scripts/common.sh@395 -- # return 1 00:04:08.902 17:45:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n2 bs=1M count=1 00:04:08.902 1+0 records in 00:04:08.902 1+0 records out 00:04:08.902 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00257994 s, 406 MB/s 00:04:08.903 17:45:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.903 17:45:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:08.903 17:45:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n3 00:04:08.903 17:45:27 -- scripts/common.sh@381 -- # local block=/dev/nvme3n3 pt 00:04:08.903 17:45:27 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n3 00:04:08.903 No valid GPT data, bailing 00:04:08.903 17:45:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n3 00:04:08.903 17:45:27 -- scripts/common.sh@394 -- # pt= 00:04:08.903 17:45:27 -- scripts/common.sh@395 -- # return 1 00:04:08.903 17:45:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n3 bs=1M count=1 00:04:08.903 1+0 records in 00:04:08.903 1+0 records out 00:04:08.903 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0043057 s, 244 MB/s 00:04:08.903 17:45:27 -- spdk/autotest.sh@105 -- # sync 00:04:08.903 17:45:27 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:08.903 17:45:27 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:08.903 17:45:27 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:10.801 17:45:28 -- spdk/autotest.sh@111 -- # uname -s 00:04:10.801 17:45:28 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:10.801 17:45:28 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:10.801 17:45:28 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:10.801 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:11.073 
Hugepages 00:04:11.073 node hugesize free / total 00:04:11.073 node0 1048576kB 0 / 0 00:04:11.074 node0 2048kB 0 / 0 00:04:11.074 00:04:11.074 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:11.333 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:11.333 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:11.333 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:11.333 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3 00:04:11.591 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:04:11.591 17:45:29 -- spdk/autotest.sh@117 -- # uname -s 00:04:11.591 17:45:29 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:11.591 17:45:29 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:11.591 17:45:29 -- common/autotest_common.sh@1512 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:11.851 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:12.418 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:12.418 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:12.418 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:12.418 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:12.418 17:45:30 -- common/autotest_common.sh@1513 -- # sleep 1 00:04:13.349 17:45:31 -- common/autotest_common.sh@1514 -- # bdfs=() 00:04:13.349 17:45:31 -- common/autotest_common.sh@1514 -- # local bdfs 00:04:13.349 17:45:31 -- common/autotest_common.sh@1516 -- # bdfs=($(get_nvme_bdfs)) 00:04:13.349 17:45:31 -- common/autotest_common.sh@1516 -- # get_nvme_bdfs 00:04:13.349 17:45:31 -- common/autotest_common.sh@1494 -- # bdfs=() 00:04:13.349 17:45:31 -- common/autotest_common.sh@1494 -- # local bdfs 00:04:13.349 17:45:31 -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:13.349 17:45:31 -- common/autotest_common.sh@1495 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:13.349 17:45:31 -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:04:13.607 17:45:31 -- common/autotest_common.sh@1496 -- # (( 4 == 0 )) 00:04:13.607 17:45:31 -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:13.607 17:45:31 -- common/autotest_common.sh@1518 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:13.864 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:13.864 Waiting for block devices as requested 00:04:13.864 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:14.121 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:14.121 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:14.121 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:19.382 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:19.382 17:45:37 -- common/autotest_common.sh@1520 -- # for bdf in "${bdfs[@]}" 00:04:19.382 17:45:37 -- common/autotest_common.sh@1521 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:19.382 17:45:37 -- common/autotest_common.sh@1483 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:19.382 17:45:37 -- common/autotest_common.sh@1483 -- # grep 0000:00:10.0/nvme/nvme 00:04:19.382 17:45:37 -- common/autotest_common.sh@1483 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:19.382 17:45:37 -- common/autotest_common.sh@1484 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:19.382 17:45:37 -- common/autotest_common.sh@1488 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:19.382 17:45:37 -- common/autotest_common.sh@1488 -- # printf '%s\n' nvme1 00:04:19.382 17:45:37 -- common/autotest_common.sh@1521 -- # nvme_ctrlr=/dev/nvme1 00:04:19.382 17:45:37 -- common/autotest_common.sh@1522 -- # [[ -z /dev/nvme1 ]] 00:04:19.382 17:45:37 -- common/autotest_common.sh@1527 -- # cut -d: -f2 00:04:19.382 17:45:37 -- common/autotest_common.sh@1527 -- # nvme id-ctrl /dev/nvme1 00:04:19.382 17:45:37 -- common/autotest_common.sh@1527 -- # grep oacs 00:04:19.382 17:45:37 -- common/autotest_common.sh@1527 -- # oacs=' 0x12a' 00:04:19.382 17:45:37 -- common/autotest_common.sh@1528 -- # oacs_ns_manage=8 00:04:19.382 17:45:37 -- common/autotest_common.sh@1530 -- # [[ 8 -ne 0 ]] 00:04:19.382 17:45:37 -- common/autotest_common.sh@1536 -- # grep unvmcap 00:04:19.382 17:45:37 -- common/autotest_common.sh@1536 -- # nvme id-ctrl /dev/nvme1 00:04:19.382 17:45:37 -- common/autotest_common.sh@1536 -- # cut -d: -f2 00:04:19.382 17:45:37 -- common/autotest_common.sh@1536 -- # unvmcap=' 0' 00:04:19.382 17:45:37 -- common/autotest_common.sh@1537 -- # [[ 0 -eq 0 ]] 00:04:19.383 17:45:37 -- common/autotest_common.sh@1539 -- # continue 00:04:19.383 17:45:37 -- common/autotest_common.sh@1520 -- # for bdf in "${bdfs[@]}" 00:04:19.383 17:45:37 -- common/autotest_common.sh@1521 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:19.383 17:45:37 -- common/autotest_common.sh@1483 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:19.383 17:45:37 -- common/autotest_common.sh@1483 -- # grep 0000:00:11.0/nvme/nvme 00:04:19.383 17:45:37 -- common/autotest_common.sh@1483 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:19.383 17:45:37 -- common/autotest_common.sh@1484 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:19.383 17:45:37 -- common/autotest_common.sh@1488 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:19.383 17:45:37 -- common/autotest_common.sh@1488 -- # printf '%s\n' nvme0 00:04:19.383 17:45:37 -- common/autotest_common.sh@1521 -- # nvme_ctrlr=/dev/nvme0 00:04:19.383 17:45:37 -- common/autotest_common.sh@1522 -- # [[ -z /dev/nvme0 ]] 00:04:19.383 17:45:37 -- common/autotest_common.sh@1527 -- # grep oacs 00:04:19.383 17:45:37 -- common/autotest_common.sh@1527 -- # nvme id-ctrl /dev/nvme0 00:04:19.383 17:45:37 -- common/autotest_common.sh@1527 -- # cut -d: -f2 00:04:19.383 17:45:37 -- common/autotest_common.sh@1527 -- # oacs=' 0x12a' 00:04:19.383 17:45:37 -- common/autotest_common.sh@1528 -- # oacs_ns_manage=8 00:04:19.383 17:45:37 -- common/autotest_common.sh@1530 -- # [[ 8 -ne 0 ]] 00:04:19.383 17:45:37 -- common/autotest_common.sh@1536 -- # nvme id-ctrl /dev/nvme0 00:04:19.383 17:45:37 -- common/autotest_common.sh@1536 -- # grep unvmcap 00:04:19.383 17:45:37 -- common/autotest_common.sh@1536 -- # cut -d: -f2 00:04:19.383 17:45:37 -- common/autotest_common.sh@1536 -- # unvmcap=' 0' 00:04:19.383 17:45:37 -- common/autotest_common.sh@1537 -- # [[ 0 -eq 0 ]] 00:04:19.383 17:45:37 -- common/autotest_common.sh@1539 -- # continue 00:04:19.383 17:45:37 -- common/autotest_common.sh@1520 -- # for bdf in "${bdfs[@]}" 00:04:19.383 17:45:37 -- common/autotest_common.sh@1521 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:19.383 17:45:37 -- common/autotest_common.sh@1483 -- # grep 0000:00:12.0/nvme/nvme 
00:04:19.383 17:45:37 -- common/autotest_common.sh@1483 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:19.383 17:45:37 -- common/autotest_common.sh@1483 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:19.383 17:45:37 -- common/autotest_common.sh@1484 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:19.383 17:45:37 -- common/autotest_common.sh@1488 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:19.383 17:45:37 -- common/autotest_common.sh@1488 -- # printf '%s\n' nvme2 00:04:19.383 17:45:37 -- common/autotest_common.sh@1521 -- # nvme_ctrlr=/dev/nvme2 00:04:19.383 17:45:37 -- common/autotest_common.sh@1522 -- # [[ -z /dev/nvme2 ]] 00:04:19.383 17:45:37 -- common/autotest_common.sh@1527 -- # nvme id-ctrl /dev/nvme2 00:04:19.383 17:45:37 -- common/autotest_common.sh@1527 -- # grep oacs 00:04:19.383 17:45:37 -- common/autotest_common.sh@1527 -- # cut -d: -f2 00:04:19.383 17:45:37 -- common/autotest_common.sh@1527 -- # oacs=' 0x12a' 00:04:19.383 17:45:37 -- common/autotest_common.sh@1528 -- # oacs_ns_manage=8 00:04:19.383 17:45:37 -- common/autotest_common.sh@1530 -- # [[ 8 -ne 0 ]] 00:04:19.383 17:45:37 -- common/autotest_common.sh@1536 -- # nvme id-ctrl /dev/nvme2 00:04:19.383 17:45:37 -- common/autotest_common.sh@1536 -- # grep unvmcap 00:04:19.383 17:45:37 -- common/autotest_common.sh@1536 -- # cut -d: -f2 00:04:19.383 17:45:37 -- common/autotest_common.sh@1536 -- # unvmcap=' 0' 00:04:19.383 17:45:37 -- common/autotest_common.sh@1537 -- # [[ 0 -eq 0 ]] 00:04:19.383 17:45:37 -- common/autotest_common.sh@1539 -- # continue 00:04:19.383 17:45:37 -- common/autotest_common.sh@1520 -- # for bdf in "${bdfs[@]}" 00:04:19.383 17:45:37 -- common/autotest_common.sh@1521 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:19.383 17:45:37 -- common/autotest_common.sh@1483 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:19.383 17:45:37 -- common/autotest_common.sh@1483 -- # grep 0000:00:13.0/nvme/nvme 00:04:19.383 17:45:37 -- common/autotest_common.sh@1483 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:19.383 17:45:37 -- common/autotest_common.sh@1484 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:19.383 17:45:37 -- common/autotest_common.sh@1488 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:19.383 17:45:37 -- common/autotest_common.sh@1488 -- # printf '%s\n' nvme3 00:04:19.383 17:45:37 -- common/autotest_common.sh@1521 -- # nvme_ctrlr=/dev/nvme3 00:04:19.383 17:45:37 -- common/autotest_common.sh@1522 -- # [[ -z /dev/nvme3 ]] 00:04:19.383 17:45:37 -- common/autotest_common.sh@1527 -- # nvme id-ctrl /dev/nvme3 00:04:19.383 17:45:37 -- common/autotest_common.sh@1527 -- # grep oacs 00:04:19.383 17:45:37 -- common/autotest_common.sh@1527 -- # cut -d: -f2 00:04:19.383 17:45:37 -- common/autotest_common.sh@1527 -- # oacs=' 0x12a' 00:04:19.383 17:45:37 -- common/autotest_common.sh@1528 -- # oacs_ns_manage=8 00:04:19.383 17:45:37 -- common/autotest_common.sh@1530 -- # [[ 8 -ne 0 ]] 00:04:19.383 17:45:37 -- common/autotest_common.sh@1536 -- # nvme id-ctrl /dev/nvme3 00:04:19.383 17:45:37 -- common/autotest_common.sh@1536 -- # grep unvmcap 00:04:19.383 17:45:37 -- common/autotest_common.sh@1536 -- # cut -d: -f2 00:04:19.383 17:45:37 -- common/autotest_common.sh@1536 -- # unvmcap=' 0' 00:04:19.383 17:45:37 -- common/autotest_common.sh@1537 -- # [[ 0 -eq 0 ]] 
00:04:19.383 17:45:37 -- common/autotest_common.sh@1539 -- # continue 00:04:19.383 17:45:37 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:19.383 17:45:37 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:19.383 17:45:37 -- common/autotest_common.sh@10 -- # set +x 00:04:19.383 17:45:37 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:19.383 17:45:37 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:19.383 17:45:37 -- common/autotest_common.sh@10 -- # set +x 00:04:19.383 17:45:37 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:19.641 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:20.208 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:20.208 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:20.208 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:20.208 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:20.208 17:45:38 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:20.208 17:45:38 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:20.208 17:45:38 -- common/autotest_common.sh@10 -- # set +x 00:04:20.467 17:45:38 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:20.467 17:45:38 -- common/autotest_common.sh@1574 -- # mapfile -t bdfs 00:04:20.467 17:45:38 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs_by_id 0x0a54 00:04:20.467 17:45:38 -- common/autotest_common.sh@1559 -- # bdfs=() 00:04:20.467 17:45:38 -- common/autotest_common.sh@1559 -- # _bdfs=() 00:04:20.467 17:45:38 -- common/autotest_common.sh@1559 -- # local bdfs _bdfs 00:04:20.467 17:45:38 -- common/autotest_common.sh@1560 -- # _bdfs=($(get_nvme_bdfs)) 00:04:20.467 17:45:38 -- common/autotest_common.sh@1560 -- # get_nvme_bdfs 00:04:20.467 17:45:38 -- common/autotest_common.sh@1494 -- # bdfs=() 00:04:20.467 17:45:38 -- common/autotest_common.sh@1494 -- # local bdfs 00:04:20.467 17:45:38 -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:20.467 17:45:38 -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:04:20.467 17:45:38 -- common/autotest_common.sh@1495 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:20.467 17:45:38 -- common/autotest_common.sh@1496 -- # (( 4 == 0 )) 00:04:20.467 17:45:38 -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:20.468 17:45:38 -- common/autotest_common.sh@1561 -- # for bdf in "${_bdfs[@]}" 00:04:20.468 17:45:38 -- common/autotest_common.sh@1562 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:20.468 17:45:38 -- common/autotest_common.sh@1562 -- # device=0x0010 00:04:20.468 17:45:38 -- common/autotest_common.sh@1563 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:20.468 17:45:38 -- common/autotest_common.sh@1561 -- # for bdf in "${_bdfs[@]}" 00:04:20.468 17:45:38 -- common/autotest_common.sh@1562 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:20.468 17:45:38 -- common/autotest_common.sh@1562 -- # device=0x0010 00:04:20.468 17:45:38 -- common/autotest_common.sh@1563 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:20.468 17:45:38 -- common/autotest_common.sh@1561 -- # for bdf in "${_bdfs[@]}" 00:04:20.468 17:45:38 -- common/autotest_common.sh@1562 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:20.468 17:45:38 -- common/autotest_common.sh@1562 -- # device=0x0010 00:04:20.468 17:45:38 -- common/autotest_common.sh@1563 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
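Note: the trace just above is opal_revert_cleanup deciding whether any controller needs an Opal revert: it reads each controller's PCI device ID from sysfs and keeps only BDFs matching 0x0a54, the ID the harness associates with Opal-capable drives. Every QEMU-emulated controller here reports 0x0010, so each comparison fails and the cleanup is a no-op. A reconstruction of the traced filter (names from the log; get_nvme_bdfs is the gen_nvme.sh/jq helper traced earlier; the body is an assumption):

    # Return the NVMe BDFs whose PCI device ID matches the requested value;
    # opal_revert_cleanup passes 0x0a54 and acts only on matching drives.
    get_nvme_bdfs_by_id() {
        local id=$1 bdf device bdfs=()
        for bdf in $(get_nvme_bdfs); do
            device=$(cat "/sys/bus/pci/devices/$bdf/device")
            [[ $device == "$id" ]] && bdfs+=("$bdf")
        done
        ((${#bdfs[@]} > 0)) && printf '%s\n' "${bdfs[@]}"
    }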
00:04:20.468 17:45:38 -- common/autotest_common.sh@1561 -- # for bdf in "${_bdfs[@]}" 00:04:20.468 17:45:38 -- common/autotest_common.sh@1562 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:20.468 17:45:38 -- common/autotest_common.sh@1562 -- # device=0x0010 00:04:20.468 17:45:38 -- common/autotest_common.sh@1563 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:20.468 17:45:38 -- common/autotest_common.sh@1568 -- # (( 0 > 0 )) 00:04:20.468 17:45:38 -- common/autotest_common.sh@1568 -- # return 0 00:04:20.468 17:45:38 -- common/autotest_common.sh@1575 -- # [[ -z '' ]] 00:04:20.468 17:45:38 -- common/autotest_common.sh@1576 -- # return 0 00:04:20.468 17:45:38 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:20.468 17:45:38 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:20.468 17:45:38 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:20.468 17:45:38 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:20.468 17:45:38 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:20.468 17:45:38 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:20.468 17:45:38 -- common/autotest_common.sh@10 -- # set +x 00:04:20.468 17:45:38 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:20.468 17:45:38 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:20.468 17:45:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:20.468 17:45:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:20.468 17:45:38 -- common/autotest_common.sh@10 -- # set +x 00:04:20.468 ************************************ 00:04:20.468 START TEST env 00:04:20.468 ************************************ 00:04:20.468 17:45:38 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:20.468 * Looking for test storage... 00:04:20.468 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:20.468 17:45:38 env -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:04:20.468 17:45:38 env -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:04:20.468 17:45:38 env -- common/autotest_common.sh@1689 -- # lcov --version 00:04:20.468 17:45:38 env -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:04:20.468 17:45:38 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.468 17:45:38 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.468 17:45:38 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.468 17:45:38 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.468 17:45:38 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.468 17:45:38 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.468 17:45:38 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.468 17:45:38 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.468 17:45:38 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.468 17:45:38 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.468 17:45:38 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.468 17:45:38 env -- scripts/common.sh@344 -- # case "$op" in 00:04:20.468 17:45:38 env -- scripts/common.sh@345 -- # : 1 00:04:20.468 17:45:38 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.468 17:45:38 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.468 17:45:38 env -- scripts/common.sh@365 -- # decimal 1 00:04:20.468 17:45:38 env -- scripts/common.sh@353 -- # local d=1 00:04:20.468 17:45:38 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.468 17:45:38 env -- scripts/common.sh@355 -- # echo 1 00:04:20.468 17:45:38 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.468 17:45:38 env -- scripts/common.sh@366 -- # decimal 2 00:04:20.468 17:45:38 env -- scripts/common.sh@353 -- # local d=2 00:04:20.468 17:45:38 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.468 17:45:38 env -- scripts/common.sh@355 -- # echo 2 00:04:20.468 17:45:38 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.468 17:45:38 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.468 17:45:38 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.468 17:45:38 env -- scripts/common.sh@368 -- # return 0 00:04:20.468 17:45:38 env -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.468 17:45:38 env -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:04:20.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.468 --rc genhtml_branch_coverage=1 00:04:20.468 --rc genhtml_function_coverage=1 00:04:20.468 --rc genhtml_legend=1 00:04:20.468 --rc geninfo_all_blocks=1 00:04:20.468 --rc geninfo_unexecuted_blocks=1 00:04:20.468 00:04:20.468 ' 00:04:20.468 17:45:38 env -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:04:20.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.468 --rc genhtml_branch_coverage=1 00:04:20.468 --rc genhtml_function_coverage=1 00:04:20.468 --rc genhtml_legend=1 00:04:20.468 --rc geninfo_all_blocks=1 00:04:20.468 --rc geninfo_unexecuted_blocks=1 00:04:20.468 00:04:20.468 ' 00:04:20.468 17:45:38 env -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:04:20.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.468 --rc genhtml_branch_coverage=1 00:04:20.468 --rc genhtml_function_coverage=1 00:04:20.468 --rc genhtml_legend=1 00:04:20.468 --rc geninfo_all_blocks=1 00:04:20.468 --rc geninfo_unexecuted_blocks=1 00:04:20.468 00:04:20.468 ' 00:04:20.468 17:45:38 env -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:04:20.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.468 --rc genhtml_branch_coverage=1 00:04:20.468 --rc genhtml_function_coverage=1 00:04:20.468 --rc genhtml_legend=1 00:04:20.468 --rc geninfo_all_blocks=1 00:04:20.468 --rc geninfo_unexecuted_blocks=1 00:04:20.468 00:04:20.468 ' 00:04:20.468 17:45:38 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:20.468 17:45:38 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:20.468 17:45:38 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:20.468 17:45:38 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.468 ************************************ 00:04:20.468 START TEST env_memory 00:04:20.468 ************************************ 00:04:20.468 17:45:38 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:20.727 00:04:20.727 00:04:20.727 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.727 http://cunit.sourceforge.net/ 00:04:20.727 00:04:20.727 00:04:20.727 Suite: memory 00:04:20.727 Test: alloc and free memory map ...[2024-10-25 17:45:38.945715] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:20.727 passed 00:04:20.727 Test: mem map translation ...[2024-10-25 17:45:38.984286] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:20.727 [2024-10-25 17:45:38.984324] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:20.727 [2024-10-25 17:45:38.984382] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:20.727 [2024-10-25 17:45:38.984397] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:20.727 passed 00:04:20.727 Test: mem map registration ...[2024-10-25 17:45:39.052309] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:20.727 [2024-10-25 17:45:39.052341] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:20.727 passed 00:04:20.727 Test: mem map adjacent registrations ...passed 00:04:20.727 00:04:20.727 Run Summary: Type Total Ran Passed Failed Inactive 00:04:20.727 suites 1 1 n/a 0 0 00:04:20.727 tests 4 4 4 0 0 00:04:20.727 asserts 152 152 152 0 n/a 00:04:20.727 00:04:20.727 Elapsed time = 0.232 seconds 00:04:20.727 00:04:20.727 real 0m0.266s 00:04:20.727 user 0m0.242s 00:04:20.727 sys 0m0.018s 00:04:20.986 17:45:39 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:20.986 17:45:39 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:20.986 ************************************ 00:04:20.986 END TEST env_memory 00:04:20.986 ************************************ 00:04:20.986 17:45:39 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:20.986 17:45:39 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:20.986 17:45:39 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:20.986 17:45:39 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.986 ************************************ 00:04:20.986 START TEST env_vtophys 00:04:20.986 ************************************ 00:04:20.986 17:45:39 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:20.986 EAL: lib.eal log level changed from notice to debug 00:04:20.986 EAL: Detected lcore 0 as core 0 on socket 0 00:04:20.986 EAL: Detected lcore 1 as core 0 on socket 0 00:04:20.986 EAL: Detected lcore 2 as core 0 on socket 0 00:04:20.986 EAL: Detected lcore 3 as core 0 on socket 0 00:04:20.986 EAL: Detected lcore 4 as core 0 on socket 0 00:04:20.986 EAL: Detected lcore 5 as core 0 on socket 0 00:04:20.986 EAL: Detected lcore 6 as core 0 on socket 0 00:04:20.986 EAL: Detected lcore 7 as core 0 on socket 0 00:04:20.986 EAL: Detected lcore 8 as core 0 on socket 0 00:04:20.986 EAL: Detected lcore 9 as core 0 on socket 0 00:04:20.986 EAL: Maximum logical cores by configuration: 128 00:04:20.986 EAL: Detected CPU lcores: 10 00:04:20.986 EAL: Detected NUMA nodes: 1 00:04:20.986 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:20.986 EAL: Detected shared linkage of DPDK 00:04:20.986 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:20.986 EAL: Selected IOVA mode 'PA' 00:04:20.986 EAL: Probing VFIO support... 00:04:20.986 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:20.986 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:20.986 EAL: Ask a virtual area of 0x2e000 bytes 00:04:20.986 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:20.986 EAL: Setting up physically contiguous memory... 00:04:20.986 EAL: Setting maximum number of open files to 524288 00:04:20.986 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:20.986 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:20.986 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.986 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:20.986 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.986 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.986 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:20.986 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:20.986 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.986 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:20.986 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.986 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.986 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:20.986 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:20.986 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.986 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:20.986 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.986 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.986 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:20.986 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:20.986 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.986 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:20.986 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.986 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.986 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:20.986 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:20.986 EAL: Hugepages will be freed exactly as allocated. 00:04:20.986 EAL: No shared files mode enabled, IPC is disabled 00:04:20.986 EAL: No shared files mode enabled, IPC is disabled 00:04:20.986 EAL: TSC frequency is ~2600000 KHz 00:04:20.986 EAL: Main lcore 0 is ready (tid=7fa8252caa40;cpuset=[0]) 00:04:20.986 EAL: Trying to obtain current memory policy. 00:04:20.986 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.986 EAL: Restoring previous memory policy: 0 00:04:20.986 EAL: request: mp_malloc_sync 00:04:20.986 EAL: No shared files mode enabled, IPC is disabled 00:04:20.986 EAL: Heap on socket 0 was expanded by 2MB 00:04:20.986 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:20.986 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:20.986 EAL: Mem event callback 'spdk:(nil)' registered 00:04:20.986 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:20.986 00:04:20.986 00:04:20.986 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.986 http://cunit.sourceforge.net/ 00:04:20.986 00:04:20.986 00:04:20.986 Suite: components_suite 00:04:21.247 Test: vtophys_malloc_test ...passed 00:04:21.247 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:21.247 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.508 EAL: Restoring previous memory policy: 4 00:04:21.508 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.508 EAL: request: mp_malloc_sync 00:04:21.508 EAL: No shared files mode enabled, IPC is disabled 00:04:21.508 EAL: Heap on socket 0 was expanded by 4MB 00:04:21.508 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.508 EAL: request: mp_malloc_sync 00:04:21.508 EAL: No shared files mode enabled, IPC is disabled 00:04:21.508 EAL: Heap on socket 0 was shrunk by 4MB 00:04:21.508 EAL: Trying to obtain current memory policy. 00:04:21.508 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.508 EAL: Restoring previous memory policy: 4 00:04:21.508 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.508 EAL: request: mp_malloc_sync 00:04:21.508 EAL: No shared files mode enabled, IPC is disabled 00:04:21.508 EAL: Heap on socket 0 was expanded by 6MB 00:04:21.508 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.508 EAL: request: mp_malloc_sync 00:04:21.508 EAL: No shared files mode enabled, IPC is disabled 00:04:21.508 EAL: Heap on socket 0 was shrunk by 6MB 00:04:21.508 EAL: Trying to obtain current memory policy. 00:04:21.508 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.508 EAL: Restoring previous memory policy: 4 00:04:21.508 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.508 EAL: request: mp_malloc_sync 00:04:21.508 EAL: No shared files mode enabled, IPC is disabled 00:04:21.508 EAL: Heap on socket 0 was expanded by 10MB 00:04:21.508 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.508 EAL: request: mp_malloc_sync 00:04:21.509 EAL: No shared files mode enabled, IPC is disabled 00:04:21.509 EAL: Heap on socket 0 was shrunk by 10MB 00:04:21.509 EAL: Trying to obtain current memory policy. 00:04:21.509 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.509 EAL: Restoring previous memory policy: 4 00:04:21.509 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.509 EAL: request: mp_malloc_sync 00:04:21.509 EAL: No shared files mode enabled, IPC is disabled 00:04:21.509 EAL: Heap on socket 0 was expanded by 18MB 00:04:21.509 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.509 EAL: request: mp_malloc_sync 00:04:21.509 EAL: No shared files mode enabled, IPC is disabled 00:04:21.509 EAL: Heap on socket 0 was shrunk by 18MB 00:04:21.509 EAL: Trying to obtain current memory policy. 00:04:21.509 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.509 EAL: Restoring previous memory policy: 4 00:04:21.509 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.509 EAL: request: mp_malloc_sync 00:04:21.509 EAL: No shared files mode enabled, IPC is disabled 00:04:21.509 EAL: Heap on socket 0 was expanded by 34MB 00:04:21.509 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.509 EAL: request: mp_malloc_sync 00:04:21.509 EAL: No shared files mode enabled, IPC is disabled 00:04:21.509 EAL: Heap on socket 0 was shrunk by 34MB 00:04:21.509 EAL: Trying to obtain current memory policy. 
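The *ERROR* lines from env_memory at the top of this section are the expected negative cases: SPDK's memory map API enforces 2 MiB granularity, so spdk_mem_register() and spdk_mem_map_set_translation() reject any vaddr or len that is not a multiple of 0x200000 (vaddr=4d2 and len=1234 both trip that check). A minimal C sketch reproducing the rejected calls from the log; the program name is made up, and it assumes a local SPDK build to compile and link against:

    /* Hedged sketch of the env_memory negative cases above. */
    #include "spdk/env.h"
    #include <stdio.h>

    int main(void)
    {
        struct spdk_env_opts opts;
        int rc;

        spdk_env_opts_init(&opts);
        opts.name = "mem_register_sketch";   /* hypothetical app name */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }

        /* len=1234 is not 2 MiB aligned: this is the
         * "invalid spdk_mem_register parameters, vaddr=200000 len=1234"
         * line in the log, and it fails before touching any mapping. */
        rc = spdk_mem_register((void *)(uintptr_t)0x200000, 1234);
        printf("unaligned len  -> rc=%d (negative expected)\n", rc);

        /* vaddr=0x4d2 (decimal 1234) fails the same alignment check. */
        rc = spdk_mem_register((void *)(uintptr_t)0x4d2, 0x200000);
        printf("unaligned vaddr -> rc=%d (negative expected)\n", rc);

        return 0;
    }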
00:04:21.509 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.509 EAL: Restoring previous memory policy: 4 00:04:21.509 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.509 EAL: request: mp_malloc_sync 00:04:21.509 EAL: No shared files mode enabled, IPC is disabled 00:04:21.509 EAL: Heap on socket 0 was expanded by 66MB 00:04:21.509 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.509 EAL: request: mp_malloc_sync 00:04:21.509 EAL: No shared files mode enabled, IPC is disabled 00:04:21.509 EAL: Heap on socket 0 was shrunk by 66MB 00:04:21.770 EAL: Trying to obtain current memory policy. 00:04:21.770 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.770 EAL: Restoring previous memory policy: 4 00:04:21.770 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.770 EAL: request: mp_malloc_sync 00:04:21.770 EAL: No shared files mode enabled, IPC is disabled 00:04:21.770 EAL: Heap on socket 0 was expanded by 130MB 00:04:21.770 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.770 EAL: request: mp_malloc_sync 00:04:21.770 EAL: No shared files mode enabled, IPC is disabled 00:04:21.770 EAL: Heap on socket 0 was shrunk by 130MB 00:04:22.031 EAL: Trying to obtain current memory policy. 00:04:22.031 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.031 EAL: Restoring previous memory policy: 4 00:04:22.031 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.031 EAL: request: mp_malloc_sync 00:04:22.031 EAL: No shared files mode enabled, IPC is disabled 00:04:22.031 EAL: Heap on socket 0 was expanded by 258MB 00:04:22.293 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.293 EAL: request: mp_malloc_sync 00:04:22.293 EAL: No shared files mode enabled, IPC is disabled 00:04:22.293 EAL: Heap on socket 0 was shrunk by 258MB 00:04:22.581 EAL: Trying to obtain current memory policy. 00:04:22.581 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.581 EAL: Restoring previous memory policy: 4 00:04:22.581 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.581 EAL: request: mp_malloc_sync 00:04:22.581 EAL: No shared files mode enabled, IPC is disabled 00:04:22.581 EAL: Heap on socket 0 was expanded by 514MB 00:04:23.161 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.420 EAL: request: mp_malloc_sync 00:04:23.420 EAL: No shared files mode enabled, IPC is disabled 00:04:23.420 EAL: Heap on socket 0 was shrunk by 514MB 00:04:23.991 EAL: Trying to obtain current memory policy. 
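The alternating "Heap on socket 0 was expanded/shrunk by N MB" pairs above and below come from vtophys_spdk_malloc_test allocating and freeing progressively larger DMA-safe buffers, with DPDK growing its heap on demand each time. A hedged sketch of the same pattern against the public env API (spdk_malloc/spdk_vtophys/spdk_free from spdk/env.h); the size list and app name are illustrative, not the test's exact sequence:

    #include "spdk/env.h"
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        struct spdk_env_opts opts;
        size_t sizes[] = { 4 << 20, 66 << 20, 514 << 20 };
        size_t i;

        spdk_env_opts_init(&opts);
        opts.name = "vtophys_sketch";        /* hypothetical app name */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }
        for (i = 0; i < 3; i++) {
            uint64_t len = sizes[i];
            /* Growing past the current heap triggers the mem event
             * callback and an "expanded by ..." log line. */
            void *buf = spdk_malloc(sizes[i], 0x200000, NULL,
                                    SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
            if (buf == NULL) {
                continue;
            }
            /* The virtual->physical translation the test asserts on. */
            printf("%zu MiB -> paddr 0x%" PRIx64 "\n",
                   sizes[i] >> 20, spdk_vtophys(buf, &len));
            spdk_free(buf);   /* may emit a matching "was shrunk" line */
        }
        return 0;
    }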
00:04:23.991 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.991 EAL: Restoring previous memory policy: 4 00:04:23.991 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.991 EAL: request: mp_malloc_sync 00:04:23.991 EAL: No shared files mode enabled, IPC is disabled 00:04:23.991 EAL: Heap on socket 0 was expanded by 1026MB 00:04:25.374 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.374 EAL: request: mp_malloc_sync 00:04:25.374 EAL: No shared files mode enabled, IPC is disabled 00:04:25.374 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:25.941 passed 00:04:25.941 00:04:25.941 Run Summary: Type Total Ran Passed Failed Inactive 00:04:25.941 suites 1 1 n/a 0 0 00:04:25.941 tests 2 2 2 0 0 00:04:25.941 asserts 5719 5719 5719 0 n/a 00:04:25.941 00:04:25.941 Elapsed time = 4.892 seconds 00:04:25.941 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.941 EAL: request: mp_malloc_sync 00:04:25.941 EAL: No shared files mode enabled, IPC is disabled 00:04:25.941 EAL: Heap on socket 0 was shrunk by 2MB 00:04:25.941 EAL: No shared files mode enabled, IPC is disabled 00:04:25.941 EAL: No shared files mode enabled, IPC is disabled 00:04:25.941 EAL: No shared files mode enabled, IPC is disabled 00:04:25.941 00:04:25.941 real 0m5.155s 00:04:25.941 user 0m4.361s 00:04:25.941 sys 0m0.647s 00:04:25.941 17:45:44 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.941 ************************************ 00:04:25.941 END TEST env_vtophys 00:04:25.941 ************************************ 00:04:25.941 17:45:44 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:26.199 17:45:44 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:26.199 17:45:44 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:26.199 17:45:44 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:26.199 17:45:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:26.199 ************************************ 00:04:26.199 START TEST env_pci 00:04:26.199 ************************************ 00:04:26.199 17:45:44 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:26.199 00:04:26.199 00:04:26.199 CUnit - A unit testing framework for C - Version 2.1-3 00:04:26.199 http://cunit.sourceforge.net/ 00:04:26.199 00:04:26.199 00:04:26.199 Suite: pci 00:04:26.200 Test: pci_hook ...[2024-10-25 17:45:44.429663] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56950 has claimed it 00:04:26.200 passed 00:04:26.200 00:04:26.200 Run Summary: Type Total Ran Passed Failed Inactive 00:04:26.200 suites 1 1 n/a 0 0 00:04:26.200 tests 1 1 1 0 0 00:04:26.200 asserts 25 25 25 0 n/a 00:04:26.200 00:04:26.200 Elapsed time = 0.006 seconds 00:04:26.200 EAL: Cannot find device (10000:00:01.0) 00:04:26.200 EAL: Failed to attach device on primary process 00:04:26.200 00:04:26.200 real 0m0.058s 00:04:26.200 user 0m0.025s 00:04:26.200 sys 0m0.033s 00:04:26.200 17:45:44 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:26.200 17:45:44 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:26.200 ************************************ 00:04:26.200 END TEST env_pci 00:04:26.200 ************************************ 00:04:26.200 17:45:44 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:26.200 17:45:44 env -- env/env.sh@15 -- # uname 00:04:26.200 17:45:44 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:26.200 17:45:44 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:26.200 17:45:44 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:26.200 17:45:44 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:26.200 17:45:44 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:26.200 17:45:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:26.200 ************************************ 00:04:26.200 START TEST env_dpdk_post_init 00:04:26.200 ************************************ 00:04:26.200 17:45:44 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:26.200 EAL: Detected CPU lcores: 10 00:04:26.200 EAL: Detected NUMA nodes: 1 00:04:26.200 EAL: Detected shared linkage of DPDK 00:04:26.200 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:26.200 EAL: Selected IOVA mode 'PA' 00:04:26.459 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:26.459 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:26.459 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:26.459 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:26.459 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:26.459 Starting DPDK initialization... 00:04:26.459 Starting SPDK post initialization... 00:04:26.459 SPDK NVMe probe 00:04:26.459 Attaching to 0000:00:10.0 00:04:26.459 Attaching to 0000:00:11.0 00:04:26.459 Attaching to 0000:00:12.0 00:04:26.459 Attaching to 0000:00:13.0 00:04:26.459 Attached to 0000:00:10.0 00:04:26.459 Attached to 0000:00:11.0 00:04:26.459 Attached to 0000:00:13.0 00:04:26.459 Attached to 0000:00:12.0 00:04:26.459 Cleaning up... 
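env_dpdk_post_init above is launched with -c 0x1 --base-virtaddr=0x200000000000, and those flags correspond directly to spdk_env_opts fields before the spdk_nvme probe attaches the four controllers. Roughly, with the values copied from the command line in the log:

    /* Hedged sketch of the env setup behind env_dpdk_post_init. */
    #include "spdk/env.h"

    int init_env(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "env_dpdk_post_init";
        opts.core_mask = "0x1";                     /* -c 0x1 */
        opts.base_virtaddr = 0x200000000000ULL;     /* --base-virtaddr */
        return spdk_env_init(&opts);                /* 0 on success */
    }

The fixed base_virtaddr is what makes the 0x2000xxxxxxxx virtual areas in the EAL output reproducible across runs.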
00:04:26.459 00:04:26.459 real 0m0.235s 00:04:26.459 user 0m0.075s 00:04:26.459 sys 0m0.061s 00:04:26.459 17:45:44 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:26.459 17:45:44 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:26.459 ************************************ 00:04:26.459 END TEST env_dpdk_post_init 00:04:26.459 ************************************ 00:04:26.459 17:45:44 env -- env/env.sh@26 -- # uname 00:04:26.459 17:45:44 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:26.459 17:45:44 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:26.459 17:45:44 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:26.459 17:45:44 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:26.459 17:45:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:26.459 ************************************ 00:04:26.459 START TEST env_mem_callbacks 00:04:26.459 ************************************ 00:04:26.459 17:45:44 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:26.459 EAL: Detected CPU lcores: 10 00:04:26.459 EAL: Detected NUMA nodes: 1 00:04:26.459 EAL: Detected shared linkage of DPDK 00:04:26.459 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:26.459 EAL: Selected IOVA mode 'PA' 00:04:26.717 00:04:26.717 00:04:26.717 CUnit - A unit testing framework for C - Version 2.1-3 00:04:26.717 http://cunit.sourceforge.net/ 00:04:26.717 00:04:26.717 00:04:26.717 Suite: memory 00:04:26.718 Test: test ... 00:04:26.718 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:26.718 register 0x200000200000 2097152 00:04:26.718 malloc 3145728 00:04:26.718 register 0x200000400000 4194304 00:04:26.718 buf 0x2000004fffc0 len 3145728 PASSED 00:04:26.718 malloc 64 00:04:26.718 buf 0x2000004ffec0 len 64 PASSED 00:04:26.718 malloc 4194304 00:04:26.718 register 0x200000800000 6291456 00:04:26.718 buf 0x2000009fffc0 len 4194304 PASSED 00:04:26.718 free 0x2000004fffc0 3145728 00:04:26.718 free 0x2000004ffec0 64 00:04:26.718 unregister 0x200000400000 4194304 PASSED 00:04:26.718 free 0x2000009fffc0 4194304 00:04:26.718 unregister 0x200000800000 6291456 PASSED 00:04:26.718 malloc 8388608 00:04:26.718 register 0x200000400000 10485760 00:04:26.718 buf 0x2000005fffc0 len 8388608 PASSED 00:04:26.718 free 0x2000005fffc0 8388608 00:04:26.718 unregister 0x200000400000 10485760 PASSED 00:04:26.718 passed 00:04:26.718 00:04:26.718 Run Summary: Type Total Ran Passed Failed Inactive 00:04:26.718 suites 1 1 n/a 0 0 00:04:26.718 tests 1 1 1 0 0 00:04:26.718 asserts 15 15 15 0 n/a 00:04:26.718 00:04:26.718 Elapsed time = 0.033 seconds 00:04:26.718 00:04:26.718 real 0m0.195s 00:04:26.718 user 0m0.048s 00:04:26.718 sys 0m0.044s 00:04:26.718 17:45:44 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:26.718 ************************************ 00:04:26.718 END TEST env_mem_callbacks 00:04:26.718 ************************************ 00:04:26.718 17:45:44 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:26.718 00:04:26.718 real 0m6.280s 00:04:26.718 user 0m4.900s 00:04:26.718 sys 0m0.993s 00:04:26.718 17:45:45 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:26.718 ************************************ 00:04:26.718 END TEST env 00:04:26.718 ************************************ 00:04:26.718 17:45:45 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:26.718 17:45:45 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:26.718 17:45:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:26.718 17:45:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:26.718 17:45:45 -- common/autotest_common.sh@10 -- # set +x 00:04:26.718 ************************************ 00:04:26.718 START TEST rpc 00:04:26.718 ************************************ 00:04:26.718 17:45:45 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:26.718 * Looking for test storage... 00:04:26.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:26.718 17:45:45 rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:04:26.718 17:45:45 rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:04:26.718 17:45:45 rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:04:26.976 17:45:45 rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:04:26.976 17:45:45 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.976 17:45:45 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.976 17:45:45 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.976 17:45:45 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.976 17:45:45 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.976 17:45:45 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.976 17:45:45 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.976 17:45:45 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.976 17:45:45 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.976 17:45:45 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.976 17:45:45 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.976 17:45:45 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:26.976 17:45:45 rpc -- scripts/common.sh@345 -- # : 1 00:04:26.976 17:45:45 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.976 17:45:45 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:26.976 17:45:45 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:26.976 17:45:45 rpc -- scripts/common.sh@353 -- # local d=1 00:04:26.976 17:45:45 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.976 17:45:45 rpc -- scripts/common.sh@355 -- # echo 1 00:04:26.976 17:45:45 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.976 17:45:45 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:26.976 17:45:45 rpc -- scripts/common.sh@353 -- # local d=2 00:04:26.976 17:45:45 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.976 17:45:45 rpc -- scripts/common.sh@355 -- # echo 2 00:04:26.976 17:45:45 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.976 17:45:45 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.976 17:45:45 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.976 17:45:45 rpc -- scripts/common.sh@368 -- # return 0 00:04:26.976 17:45:45 rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.977 17:45:45 rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:04:26.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.977 --rc genhtml_branch_coverage=1 00:04:26.977 --rc genhtml_function_coverage=1 00:04:26.977 --rc genhtml_legend=1 00:04:26.977 --rc geninfo_all_blocks=1 00:04:26.977 --rc geninfo_unexecuted_blocks=1 00:04:26.977 00:04:26.977 ' 00:04:26.977 17:45:45 rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:04:26.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.977 --rc genhtml_branch_coverage=1 00:04:26.977 --rc genhtml_function_coverage=1 00:04:26.977 --rc genhtml_legend=1 00:04:26.977 --rc geninfo_all_blocks=1 00:04:26.977 --rc geninfo_unexecuted_blocks=1 00:04:26.977 00:04:26.977 ' 00:04:26.977 17:45:45 rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:04:26.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.977 --rc genhtml_branch_coverage=1 00:04:26.977 --rc genhtml_function_coverage=1 00:04:26.977 --rc genhtml_legend=1 00:04:26.977 --rc geninfo_all_blocks=1 00:04:26.977 --rc geninfo_unexecuted_blocks=1 00:04:26.977 00:04:26.977 ' 00:04:26.977 17:45:45 rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:04:26.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.977 --rc genhtml_branch_coverage=1 00:04:26.977 --rc genhtml_function_coverage=1 00:04:26.977 --rc genhtml_legend=1 00:04:26.977 --rc geninfo_all_blocks=1 00:04:26.977 --rc geninfo_unexecuted_blocks=1 00:04:26.977 00:04:26.977 ' 00:04:26.977 17:45:45 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57077 00:04:26.977 17:45:45 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:26.977 17:45:45 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:26.977 17:45:45 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57077 00:04:26.977 17:45:45 rpc -- common/autotest_common.sh@831 -- # '[' -z 57077 ']' 00:04:26.977 17:45:45 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.977 17:45:45 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:26.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.977 17:45:45 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
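waitforlisten, whose "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message appears above, effectively polls until the spdk_tgt RPC socket accepts a connection. A plain POSIX sketch of that readiness check; the socket path is the one printed in the log, and the helper name is invented:

    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    /* Returns 1 once something is listening on the RPC socket. */
    static int rpc_sock_ready(const char *path)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        int rc;

        if (fd < 0) {
            return 0;
        }
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
        rc = connect(fd, (struct sockaddr *)&addr, sizeof(addr));
        close(fd);
        return rc == 0;
    }

    /* usage: while (!rpc_sock_ready("/var/tmp/spdk.sock")) sleep(1); */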
00:04:26.977 17:45:45 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:26.977 17:45:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.977 [2024-10-25 17:45:45.286189] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:04:26.977 [2024-10-25 17:45:45.286311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57077 ] 00:04:27.236 [2024-10-25 17:45:45.445732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.236 [2024-10-25 17:45:45.543050] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:27.236 [2024-10-25 17:45:45.543108] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57077' to capture a snapshot of events at runtime. 00:04:27.236 [2024-10-25 17:45:45.543118] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:27.236 [2024-10-25 17:45:45.543127] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:27.236 [2024-10-25 17:45:45.543134] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57077 for offline analysis/debug. 00:04:27.236 [2024-10-25 17:45:45.543987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.803 17:45:46 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:27.803 17:45:46 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:27.803 17:45:46 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:27.803 17:45:46 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:27.803 17:45:46 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:27.803 17:45:46 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:27.803 17:45:46 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.803 17:45:46 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.803 17:45:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.803 ************************************ 00:04:27.803 START TEST rpc_integrity 00:04:27.803 ************************************ 00:04:27.803 17:45:46 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:27.803 17:45:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:27.803 17:45:46 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.803 17:45:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.803 17:45:46 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.803 17:45:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:27.803 17:45:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:27.803 17:45:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:27.803 17:45:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:27.803 17:45:46 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.803 17:45:46 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.803 17:45:46 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.803 17:45:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:27.803 17:45:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:27.803 17:45:46 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.803 17:45:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.803 17:45:46 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.803 17:45:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:27.803 { 00:04:27.803 "name": "Malloc0", 00:04:27.803 "aliases": [ 00:04:27.803 "e7615018-917e-4246-acc1-ab53aa389e59" 00:04:27.803 ], 00:04:27.803 "product_name": "Malloc disk", 00:04:27.803 "block_size": 512, 00:04:27.803 "num_blocks": 16384, 00:04:27.803 "uuid": "e7615018-917e-4246-acc1-ab53aa389e59", 00:04:27.803 "assigned_rate_limits": { 00:04:27.803 "rw_ios_per_sec": 0, 00:04:27.803 "rw_mbytes_per_sec": 0, 00:04:27.803 "r_mbytes_per_sec": 0, 00:04:27.803 "w_mbytes_per_sec": 0 00:04:27.803 }, 00:04:27.803 "claimed": false, 00:04:27.803 "zoned": false, 00:04:27.803 "supported_io_types": { 00:04:27.803 "read": true, 00:04:27.803 "write": true, 00:04:27.803 "unmap": true, 00:04:27.803 "flush": true, 00:04:27.803 "reset": true, 00:04:27.803 "nvme_admin": false, 00:04:27.803 "nvme_io": false, 00:04:27.803 "nvme_io_md": false, 00:04:27.803 "write_zeroes": true, 00:04:27.803 "zcopy": true, 00:04:27.803 "get_zone_info": false, 00:04:27.803 "zone_management": false, 00:04:27.803 "zone_append": false, 00:04:27.803 "compare": false, 00:04:27.803 "compare_and_write": false, 00:04:27.803 "abort": true, 00:04:27.803 "seek_hole": false, 00:04:27.803 "seek_data": false, 00:04:27.803 "copy": true, 00:04:27.803 "nvme_iov_md": false 00:04:27.803 }, 00:04:27.803 "memory_domains": [ 00:04:27.803 { 00:04:27.803 "dma_device_id": "system", 00:04:27.803 "dma_device_type": 1 00:04:27.803 }, 00:04:27.803 { 00:04:27.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.803 "dma_device_type": 2 00:04:27.803 } 00:04:27.803 ], 00:04:27.803 "driver_specific": {} 00:04:27.803 } 00:04:27.803 ]' 00:04:27.803 17:45:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:28.061 17:45:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:28.061 17:45:46 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:28.061 17:45:46 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.061 17:45:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.061 [2024-10-25 17:45:46.243303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:28.061 [2024-10-25 17:45:46.243358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:28.061 [2024-10-25 17:45:46.243384] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:28.061 [2024-10-25 17:45:46.243395] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:28.061 [2024-10-25 17:45:46.245538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:28.061 [2024-10-25 17:45:46.245588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:28.061 Passthru0 00:04:28.061 17:45:46 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.061 
17:45:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:28.061 17:45:46 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.061 17:45:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.061 17:45:46 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.061 17:45:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:28.061 { 00:04:28.061 "name": "Malloc0", 00:04:28.061 "aliases": [ 00:04:28.061 "e7615018-917e-4246-acc1-ab53aa389e59" 00:04:28.061 ], 00:04:28.061 "product_name": "Malloc disk", 00:04:28.061 "block_size": 512, 00:04:28.061 "num_blocks": 16384, 00:04:28.061 "uuid": "e7615018-917e-4246-acc1-ab53aa389e59", 00:04:28.061 "assigned_rate_limits": { 00:04:28.061 "rw_ios_per_sec": 0, 00:04:28.061 "rw_mbytes_per_sec": 0, 00:04:28.061 "r_mbytes_per_sec": 0, 00:04:28.061 "w_mbytes_per_sec": 0 00:04:28.061 }, 00:04:28.061 "claimed": true, 00:04:28.061 "claim_type": "exclusive_write", 00:04:28.061 "zoned": false, 00:04:28.061 "supported_io_types": { 00:04:28.061 "read": true, 00:04:28.061 "write": true, 00:04:28.061 "unmap": true, 00:04:28.061 "flush": true, 00:04:28.061 "reset": true, 00:04:28.061 "nvme_admin": false, 00:04:28.061 "nvme_io": false, 00:04:28.061 "nvme_io_md": false, 00:04:28.061 "write_zeroes": true, 00:04:28.061 "zcopy": true, 00:04:28.061 "get_zone_info": false, 00:04:28.061 "zone_management": false, 00:04:28.061 "zone_append": false, 00:04:28.061 "compare": false, 00:04:28.061 "compare_and_write": false, 00:04:28.061 "abort": true, 00:04:28.061 "seek_hole": false, 00:04:28.061 "seek_data": false, 00:04:28.061 "copy": true, 00:04:28.061 "nvme_iov_md": false 00:04:28.061 }, 00:04:28.061 "memory_domains": [ 00:04:28.061 { 00:04:28.061 "dma_device_id": "system", 00:04:28.061 "dma_device_type": 1 00:04:28.061 }, 00:04:28.061 { 00:04:28.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.061 "dma_device_type": 2 00:04:28.061 } 00:04:28.061 ], 00:04:28.061 "driver_specific": {} 00:04:28.061 }, 00:04:28.061 { 00:04:28.061 "name": "Passthru0", 00:04:28.061 "aliases": [ 00:04:28.061 "08c765a6-5e26-5576-9c6c-07f7488a98f8" 00:04:28.061 ], 00:04:28.061 "product_name": "passthru", 00:04:28.061 "block_size": 512, 00:04:28.061 "num_blocks": 16384, 00:04:28.061 "uuid": "08c765a6-5e26-5576-9c6c-07f7488a98f8", 00:04:28.061 "assigned_rate_limits": { 00:04:28.061 "rw_ios_per_sec": 0, 00:04:28.061 "rw_mbytes_per_sec": 0, 00:04:28.061 "r_mbytes_per_sec": 0, 00:04:28.061 "w_mbytes_per_sec": 0 00:04:28.061 }, 00:04:28.061 "claimed": false, 00:04:28.061 "zoned": false, 00:04:28.061 "supported_io_types": { 00:04:28.061 "read": true, 00:04:28.061 "write": true, 00:04:28.061 "unmap": true, 00:04:28.061 "flush": true, 00:04:28.061 "reset": true, 00:04:28.061 "nvme_admin": false, 00:04:28.061 "nvme_io": false, 00:04:28.061 "nvme_io_md": false, 00:04:28.061 "write_zeroes": true, 00:04:28.061 "zcopy": true, 00:04:28.061 "get_zone_info": false, 00:04:28.061 "zone_management": false, 00:04:28.061 "zone_append": false, 00:04:28.061 "compare": false, 00:04:28.061 "compare_and_write": false, 00:04:28.061 "abort": true, 00:04:28.061 "seek_hole": false, 00:04:28.061 "seek_data": false, 00:04:28.061 "copy": true, 00:04:28.061 "nvme_iov_md": false 00:04:28.061 }, 00:04:28.061 "memory_domains": [ 00:04:28.061 { 00:04:28.061 "dma_device_id": "system", 00:04:28.061 "dma_device_type": 1 00:04:28.061 }, 00:04:28.061 { 00:04:28.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.062 "dma_device_type": 2 
00:04:28.062 } 00:04:28.062 ], 00:04:28.062 "driver_specific": { 00:04:28.062 "passthru": { 00:04:28.062 "name": "Passthru0", 00:04:28.062 "base_bdev_name": "Malloc0" 00:04:28.062 } 00:04:28.062 } 00:04:28.062 } 00:04:28.062 ]' 00:04:28.062 17:45:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:28.062 17:45:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:28.062 17:45:46 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:28.062 17:45:46 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.062 17:45:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.062 17:45:46 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.062 17:45:46 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:28.062 17:45:46 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.062 17:45:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.062 17:45:46 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.062 17:45:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:28.062 17:45:46 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.062 17:45:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.062 17:45:46 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.062 17:45:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:28.062 17:45:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:28.062 17:45:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:28.062 00:04:28.062 real 0m0.236s 00:04:28.062 user 0m0.124s 00:04:28.062 sys 0m0.029s 00:04:28.062 17:45:46 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:28.062 17:45:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.062 ************************************ 00:04:28.062 END TEST rpc_integrity 00:04:28.062 ************************************ 00:04:28.062 17:45:46 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:28.062 17:45:46 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:28.062 17:45:46 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:28.062 17:45:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.062 ************************************ 00:04:28.062 START TEST rpc_plugins 00:04:28.062 ************************************ 00:04:28.062 17:45:46 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:28.062 17:45:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:28.062 17:45:46 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.062 17:45:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:28.062 17:45:46 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.062 17:45:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:28.062 17:45:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:28.062 17:45:46 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.062 17:45:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:28.062 17:45:46 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.062 17:45:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:28.062 { 00:04:28.062 "name": "Malloc1", 00:04:28.062 "aliases": 
[ 00:04:28.062 "8f3e0a19-0072-438a-b437-2e875b2007ee" 00:04:28.062 ], 00:04:28.062 "product_name": "Malloc disk", 00:04:28.062 "block_size": 4096, 00:04:28.062 "num_blocks": 256, 00:04:28.062 "uuid": "8f3e0a19-0072-438a-b437-2e875b2007ee", 00:04:28.062 "assigned_rate_limits": { 00:04:28.062 "rw_ios_per_sec": 0, 00:04:28.062 "rw_mbytes_per_sec": 0, 00:04:28.062 "r_mbytes_per_sec": 0, 00:04:28.062 "w_mbytes_per_sec": 0 00:04:28.062 }, 00:04:28.062 "claimed": false, 00:04:28.062 "zoned": false, 00:04:28.062 "supported_io_types": { 00:04:28.062 "read": true, 00:04:28.062 "write": true, 00:04:28.062 "unmap": true, 00:04:28.062 "flush": true, 00:04:28.062 "reset": true, 00:04:28.062 "nvme_admin": false, 00:04:28.062 "nvme_io": false, 00:04:28.062 "nvme_io_md": false, 00:04:28.062 "write_zeroes": true, 00:04:28.062 "zcopy": true, 00:04:28.062 "get_zone_info": false, 00:04:28.062 "zone_management": false, 00:04:28.062 "zone_append": false, 00:04:28.062 "compare": false, 00:04:28.062 "compare_and_write": false, 00:04:28.062 "abort": true, 00:04:28.062 "seek_hole": false, 00:04:28.062 "seek_data": false, 00:04:28.062 "copy": true, 00:04:28.062 "nvme_iov_md": false 00:04:28.062 }, 00:04:28.062 "memory_domains": [ 00:04:28.062 { 00:04:28.062 "dma_device_id": "system", 00:04:28.062 "dma_device_type": 1 00:04:28.062 }, 00:04:28.062 { 00:04:28.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.062 "dma_device_type": 2 00:04:28.062 } 00:04:28.062 ], 00:04:28.062 "driver_specific": {} 00:04:28.062 } 00:04:28.062 ]' 00:04:28.062 17:45:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:28.062 17:45:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:28.062 17:45:46 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:28.062 17:45:46 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.062 17:45:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:28.062 17:45:46 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.062 17:45:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:28.062 17:45:46 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.062 17:45:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:28.062 17:45:46 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.062 17:45:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:28.062 17:45:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:28.321 17:45:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:28.321 00:04:28.321 real 0m0.111s 00:04:28.321 user 0m0.060s 00:04:28.321 sys 0m0.017s 00:04:28.321 17:45:46 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:28.321 17:45:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:28.321 ************************************ 00:04:28.321 END TEST rpc_plugins 00:04:28.321 ************************************ 00:04:28.321 17:45:46 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:28.321 17:45:46 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:28.321 17:45:46 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:28.321 17:45:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.321 ************************************ 00:04:28.321 START TEST rpc_trace_cmd_test 00:04:28.321 ************************************ 00:04:28.321 17:45:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 
-- # rpc_trace_cmd_test 00:04:28.321 17:45:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:28.321 17:45:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:28.321 17:45:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.321 17:45:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:28.321 17:45:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.321 17:45:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:28.321 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57077", 00:04:28.321 "tpoint_group_mask": "0x8", 00:04:28.321 "iscsi_conn": { 00:04:28.321 "mask": "0x2", 00:04:28.321 "tpoint_mask": "0x0" 00:04:28.321 }, 00:04:28.321 "scsi": { 00:04:28.321 "mask": "0x4", 00:04:28.321 "tpoint_mask": "0x0" 00:04:28.321 }, 00:04:28.321 "bdev": { 00:04:28.321 "mask": "0x8", 00:04:28.321 "tpoint_mask": "0xffffffffffffffff" 00:04:28.321 }, 00:04:28.321 "nvmf_rdma": { 00:04:28.321 "mask": "0x10", 00:04:28.321 "tpoint_mask": "0x0" 00:04:28.321 }, 00:04:28.321 "nvmf_tcp": { 00:04:28.321 "mask": "0x20", 00:04:28.321 "tpoint_mask": "0x0" 00:04:28.321 }, 00:04:28.321 "ftl": { 00:04:28.321 "mask": "0x40", 00:04:28.321 "tpoint_mask": "0x0" 00:04:28.321 }, 00:04:28.321 "blobfs": { 00:04:28.321 "mask": "0x80", 00:04:28.321 "tpoint_mask": "0x0" 00:04:28.321 }, 00:04:28.321 "dsa": { 00:04:28.321 "mask": "0x200", 00:04:28.321 "tpoint_mask": "0x0" 00:04:28.321 }, 00:04:28.321 "thread": { 00:04:28.321 "mask": "0x400", 00:04:28.321 "tpoint_mask": "0x0" 00:04:28.321 }, 00:04:28.321 "nvme_pcie": { 00:04:28.321 "mask": "0x800", 00:04:28.321 "tpoint_mask": "0x0" 00:04:28.321 }, 00:04:28.321 "iaa": { 00:04:28.321 "mask": "0x1000", 00:04:28.321 "tpoint_mask": "0x0" 00:04:28.321 }, 00:04:28.321 "nvme_tcp": { 00:04:28.321 "mask": "0x2000", 00:04:28.321 "tpoint_mask": "0x0" 00:04:28.321 }, 00:04:28.321 "bdev_nvme": { 00:04:28.321 "mask": "0x4000", 00:04:28.321 "tpoint_mask": "0x0" 00:04:28.321 }, 00:04:28.321 "sock": { 00:04:28.321 "mask": "0x8000", 00:04:28.321 "tpoint_mask": "0x0" 00:04:28.321 }, 00:04:28.321 "blob": { 00:04:28.321 "mask": "0x10000", 00:04:28.321 "tpoint_mask": "0x0" 00:04:28.321 }, 00:04:28.321 "bdev_raid": { 00:04:28.321 "mask": "0x20000", 00:04:28.321 "tpoint_mask": "0x0" 00:04:28.321 }, 00:04:28.321 "scheduler": { 00:04:28.321 "mask": "0x40000", 00:04:28.321 "tpoint_mask": "0x0" 00:04:28.321 } 00:04:28.321 }' 00:04:28.321 17:45:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:28.321 17:45:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:28.321 17:45:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:28.321 17:45:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:28.321 17:45:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:28.321 17:45:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:28.321 17:45:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:28.321 17:45:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:28.321 17:45:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:28.321 17:45:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:28.321 00:04:28.321 real 0m0.174s 00:04:28.321 user 0m0.144s 00:04:28.321 sys 0m0.022s 00:04:28.321 17:45:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:04:28.321 17:45:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:28.321 ************************************ 00:04:28.321 END TEST rpc_trace_cmd_test 00:04:28.321 ************************************ 00:04:28.580 17:45:46 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:28.580 17:45:46 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:28.580 17:45:46 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:28.580 17:45:46 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:28.580 17:45:46 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:28.580 17:45:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.580 ************************************ 00:04:28.580 START TEST rpc_daemon_integrity 00:04:28.580 ************************************ 00:04:28.580 17:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:28.580 17:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:28.580 17:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.580 17:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.580 17:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.580 17:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:28.580 17:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:28.580 17:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:28.580 17:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:28.580 17:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.580 17:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.580 17:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.580 17:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:28.580 17:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:28.580 17:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.580 17:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.580 17:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.580 17:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:28.580 { 00:04:28.580 "name": "Malloc2", 00:04:28.580 "aliases": [ 00:04:28.580 "c33f51a3-8ec6-4441-85f0-bf0e37c9e9cc" 00:04:28.580 ], 00:04:28.580 "product_name": "Malloc disk", 00:04:28.580 "block_size": 512, 00:04:28.580 "num_blocks": 16384, 00:04:28.580 "uuid": "c33f51a3-8ec6-4441-85f0-bf0e37c9e9cc", 00:04:28.580 "assigned_rate_limits": { 00:04:28.580 "rw_ios_per_sec": 0, 00:04:28.580 "rw_mbytes_per_sec": 0, 00:04:28.580 "r_mbytes_per_sec": 0, 00:04:28.580 "w_mbytes_per_sec": 0 00:04:28.580 }, 00:04:28.580 "claimed": false, 00:04:28.580 "zoned": false, 00:04:28.580 "supported_io_types": { 00:04:28.580 "read": true, 00:04:28.580 "write": true, 00:04:28.580 "unmap": true, 00:04:28.580 "flush": true, 00:04:28.580 "reset": true, 00:04:28.580 "nvme_admin": false, 00:04:28.580 "nvme_io": false, 00:04:28.580 "nvme_io_md": false, 00:04:28.580 "write_zeroes": true, 00:04:28.580 "zcopy": true, 00:04:28.580 "get_zone_info": false, 00:04:28.580 "zone_management": false, 00:04:28.580 "zone_append": false, 00:04:28.580 "compare": false, 00:04:28.580 
"compare_and_write": false, 00:04:28.580 "abort": true, 00:04:28.580 "seek_hole": false, 00:04:28.580 "seek_data": false, 00:04:28.580 "copy": true, 00:04:28.580 "nvme_iov_md": false 00:04:28.580 }, 00:04:28.580 "memory_domains": [ 00:04:28.580 { 00:04:28.580 "dma_device_id": "system", 00:04:28.580 "dma_device_type": 1 00:04:28.580 }, 00:04:28.580 { 00:04:28.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.580 "dma_device_type": 2 00:04:28.580 } 00:04:28.580 ], 00:04:28.580 "driver_specific": {} 00:04:28.580 } 00:04:28.580 ]' 00:04:28.580 17:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:28.580 17:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:28.580 17:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:28.580 17:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.580 17:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.581 [2024-10-25 17:45:46.865751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:28.581 [2024-10-25 17:45:46.865812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:28.581 [2024-10-25 17:45:46.865830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:28.581 [2024-10-25 17:45:46.865841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:28.581 [2024-10-25 17:45:46.867982] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:28.581 [2024-10-25 17:45:46.868018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:28.581 Passthru0 00:04:28.581 17:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.581 17:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:28.581 17:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.581 17:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.581 17:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.581 17:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:28.581 { 00:04:28.581 "name": "Malloc2", 00:04:28.581 "aliases": [ 00:04:28.581 "c33f51a3-8ec6-4441-85f0-bf0e37c9e9cc" 00:04:28.581 ], 00:04:28.581 "product_name": "Malloc disk", 00:04:28.581 "block_size": 512, 00:04:28.581 "num_blocks": 16384, 00:04:28.581 "uuid": "c33f51a3-8ec6-4441-85f0-bf0e37c9e9cc", 00:04:28.581 "assigned_rate_limits": { 00:04:28.581 "rw_ios_per_sec": 0, 00:04:28.581 "rw_mbytes_per_sec": 0, 00:04:28.581 "r_mbytes_per_sec": 0, 00:04:28.581 "w_mbytes_per_sec": 0 00:04:28.581 }, 00:04:28.581 "claimed": true, 00:04:28.581 "claim_type": "exclusive_write", 00:04:28.581 "zoned": false, 00:04:28.581 "supported_io_types": { 00:04:28.581 "read": true, 00:04:28.581 "write": true, 00:04:28.581 "unmap": true, 00:04:28.581 "flush": true, 00:04:28.581 "reset": true, 00:04:28.581 "nvme_admin": false, 00:04:28.581 "nvme_io": false, 00:04:28.581 "nvme_io_md": false, 00:04:28.581 "write_zeroes": true, 00:04:28.581 "zcopy": true, 00:04:28.581 "get_zone_info": false, 00:04:28.581 "zone_management": false, 00:04:28.581 "zone_append": false, 00:04:28.581 "compare": false, 00:04:28.581 "compare_and_write": false, 00:04:28.581 "abort": true, 00:04:28.581 "seek_hole": false, 00:04:28.581 "seek_data": false, 
00:04:28.581 "copy": true, 00:04:28.581 "nvme_iov_md": false 00:04:28.581 }, 00:04:28.581 "memory_domains": [ 00:04:28.581 { 00:04:28.581 "dma_device_id": "system", 00:04:28.581 "dma_device_type": 1 00:04:28.581 }, 00:04:28.581 { 00:04:28.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.581 "dma_device_type": 2 00:04:28.581 } 00:04:28.581 ], 00:04:28.581 "driver_specific": {} 00:04:28.581 }, 00:04:28.581 { 00:04:28.581 "name": "Passthru0", 00:04:28.581 "aliases": [ 00:04:28.581 "6acf2e4e-5996-59f7-b941-723d99095b4e" 00:04:28.581 ], 00:04:28.581 "product_name": "passthru", 00:04:28.581 "block_size": 512, 00:04:28.581 "num_blocks": 16384, 00:04:28.581 "uuid": "6acf2e4e-5996-59f7-b941-723d99095b4e", 00:04:28.581 "assigned_rate_limits": { 00:04:28.581 "rw_ios_per_sec": 0, 00:04:28.581 "rw_mbytes_per_sec": 0, 00:04:28.581 "r_mbytes_per_sec": 0, 00:04:28.581 "w_mbytes_per_sec": 0 00:04:28.581 }, 00:04:28.581 "claimed": false, 00:04:28.581 "zoned": false, 00:04:28.581 "supported_io_types": { 00:04:28.581 "read": true, 00:04:28.581 "write": true, 00:04:28.581 "unmap": true, 00:04:28.581 "flush": true, 00:04:28.581 "reset": true, 00:04:28.581 "nvme_admin": false, 00:04:28.581 "nvme_io": false, 00:04:28.581 "nvme_io_md": false, 00:04:28.581 "write_zeroes": true, 00:04:28.581 "zcopy": true, 00:04:28.581 "get_zone_info": false, 00:04:28.581 "zone_management": false, 00:04:28.581 "zone_append": false, 00:04:28.581 "compare": false, 00:04:28.581 "compare_and_write": false, 00:04:28.581 "abort": true, 00:04:28.581 "seek_hole": false, 00:04:28.581 "seek_data": false, 00:04:28.581 "copy": true, 00:04:28.581 "nvme_iov_md": false 00:04:28.581 }, 00:04:28.581 "memory_domains": [ 00:04:28.581 { 00:04:28.581 "dma_device_id": "system", 00:04:28.581 "dma_device_type": 1 00:04:28.581 }, 00:04:28.581 { 00:04:28.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.581 "dma_device_type": 2 00:04:28.581 } 00:04:28.581 ], 00:04:28.581 "driver_specific": { 00:04:28.581 "passthru": { 00:04:28.581 "name": "Passthru0", 00:04:28.581 "base_bdev_name": "Malloc2" 00:04:28.581 } 00:04:28.581 } 00:04:28.581 } 00:04:28.581 ]' 00:04:28.581 17:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:28.581 17:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:28.581 17:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:28.581 17:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.581 17:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.581 17:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.581 17:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:28.581 17:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.581 17:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.581 17:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.581 17:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:28.581 17:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.581 17:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.581 17:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.581 17:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:04:28.581 17:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:28.581 17:45:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:28.581 00:04:28.581 real 0m0.228s 00:04:28.581 user 0m0.127s 00:04:28.581 sys 0m0.028s 00:04:28.581 17:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:28.581 17:45:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.581 ************************************ 00:04:28.581 END TEST rpc_daemon_integrity 00:04:28.581 ************************************ 00:04:28.841 17:45:47 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:28.841 17:45:47 rpc -- rpc/rpc.sh@84 -- # killprocess 57077 00:04:28.841 17:45:47 rpc -- common/autotest_common.sh@950 -- # '[' -z 57077 ']' 00:04:28.841 17:45:47 rpc -- common/autotest_common.sh@954 -- # kill -0 57077 00:04:28.841 17:45:47 rpc -- common/autotest_common.sh@955 -- # uname 00:04:28.841 17:45:47 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:28.841 17:45:47 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57077 00:04:28.841 17:45:47 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:28.841 killing process with pid 57077 00:04:28.841 17:45:47 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:28.841 17:45:47 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57077' 00:04:28.841 17:45:47 rpc -- common/autotest_common.sh@969 -- # kill 57077 00:04:28.841 17:45:47 rpc -- common/autotest_common.sh@974 -- # wait 57077 00:04:30.214 00:04:30.214 real 0m3.210s 00:04:30.214 user 0m3.664s 00:04:30.214 sys 0m0.550s 00:04:30.214 17:45:48 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:30.214 ************************************ 00:04:30.214 END TEST rpc 00:04:30.214 17:45:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.214 ************************************ 00:04:30.214 17:45:48 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:30.214 17:45:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.214 17:45:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.214 17:45:48 -- common/autotest_common.sh@10 -- # set +x 00:04:30.214 ************************************ 00:04:30.214 START TEST skip_rpc 00:04:30.214 ************************************ 00:04:30.214 17:45:48 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:30.214 * Looking for test storage... 
00:04:30.214 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:30.214 17:45:48 skip_rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:04:30.214 17:45:48 skip_rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:04:30.214 17:45:48 skip_rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:04:30.214 17:45:48 skip_rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:04:30.214 17:45:48 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.214 17:45:48 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.214 17:45:48 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.214 17:45:48 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.214 17:45:48 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.214 17:45:48 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.214 17:45:48 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.214 17:45:48 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.214 17:45:48 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.214 17:45:48 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.214 17:45:48 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.214 17:45:48 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:30.214 17:45:48 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:30.214 17:45:48 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.214 17:45:48 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:30.214 17:45:48 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:30.214 17:45:48 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:30.214 17:45:48 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.214 17:45:48 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:30.214 17:45:48 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.214 17:45:48 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:30.214 17:45:48 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:30.214 17:45:48 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.214 17:45:48 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:30.214 17:45:48 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.214 17:45:48 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.214 17:45:48 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.214 17:45:48 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:30.214 17:45:48 skip_rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.214 17:45:48 skip_rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:04:30.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.214 --rc genhtml_branch_coverage=1 00:04:30.214 --rc genhtml_function_coverage=1 00:04:30.214 --rc genhtml_legend=1 00:04:30.214 --rc geninfo_all_blocks=1 00:04:30.214 --rc geninfo_unexecuted_blocks=1 00:04:30.214 00:04:30.214 ' 00:04:30.214 17:45:48 skip_rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:04:30.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.214 --rc genhtml_branch_coverage=1 00:04:30.214 --rc genhtml_function_coverage=1 00:04:30.214 --rc genhtml_legend=1 00:04:30.214 --rc geninfo_all_blocks=1 00:04:30.214 --rc geninfo_unexecuted_blocks=1 00:04:30.214 00:04:30.214 ' 00:04:30.214 17:45:48 skip_rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 
00:04:30.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.214 --rc genhtml_branch_coverage=1 00:04:30.214 --rc genhtml_function_coverage=1 00:04:30.214 --rc genhtml_legend=1 00:04:30.214 --rc geninfo_all_blocks=1 00:04:30.214 --rc geninfo_unexecuted_blocks=1 00:04:30.214 00:04:30.214 ' 00:04:30.214 17:45:48 skip_rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:04:30.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.214 --rc genhtml_branch_coverage=1 00:04:30.214 --rc genhtml_function_coverage=1 00:04:30.214 --rc genhtml_legend=1 00:04:30.214 --rc geninfo_all_blocks=1 00:04:30.214 --rc geninfo_unexecuted_blocks=1 00:04:30.214 00:04:30.214 ' 00:04:30.214 17:45:48 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:30.214 17:45:48 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:30.214 17:45:48 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:30.214 17:45:48 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.214 17:45:48 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.214 17:45:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.214 ************************************ 00:04:30.214 START TEST skip_rpc 00:04:30.214 ************************************ 00:04:30.214 17:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:30.214 17:45:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57284 00:04:30.214 17:45:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:30.214 17:45:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:30.214 17:45:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:30.214 [2024-10-25 17:45:48.534080] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
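The trace above shows test_skip_rpc launching spdk_tgt with --no-rpc-server and sleeping instead of polling a socket; the check that follows asserts that an RPC call now fails. A minimal sketch of that flow, assuming a simplified NOT helper (the real autotest_common.sh version, visible later in the trace, also normalizes 128+signal exit statuses) and using scripts/rpc.py directly in place of the rpc_cmd wrapper:

    # Sketch of the test_skip_rpc flow traced above; NOT is simplified and
    # rpc.py stands in for rpc_cmd. Paths are taken from the trace.
    NOT() { if "$@"; then return 1; else return 0; fi; }  # pass iff the command fails

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    trap 'kill $spdk_pid; exit 1' SIGINT SIGTERM EXIT
    sleep 5                        # no RPC socket will exist, so there is nothing to poll
    NOT rpc.py spdk_get_version    # must fail: /var/tmp/spdk.sock was never created
    trap - SIGINT SIGTERM EXIT
    kill "$spdk_pid"; wait "$spdk_pid"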
00:04:30.214 [2024-10-25 17:45:48.534198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57284 ] 00:04:30.472 [2024-10-25 17:45:48.689916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.472 [2024-10-25 17:45:48.765368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.799 17:45:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:35.799 17:45:53 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:35.799 17:45:53 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:35.799 17:45:53 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:35.799 17:45:53 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:35.799 17:45:53 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:35.799 17:45:53 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:35.799 17:45:53 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:35.799 17:45:53 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.799 17:45:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.799 17:45:53 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:35.799 17:45:53 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:35.799 17:45:53 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:35.799 17:45:53 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:35.799 17:45:53 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:35.799 17:45:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:35.799 17:45:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57284 00:04:35.799 17:45:53 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57284 ']' 00:04:35.799 17:45:53 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57284 00:04:35.799 17:45:53 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:35.799 17:45:53 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:35.799 17:45:53 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57284 00:04:35.799 17:45:53 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:35.799 17:45:53 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:35.799 17:45:53 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57284' 00:04:35.799 killing process with pid 57284 00:04:35.799 17:45:53 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 57284 00:04:35.799 17:45:53 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57284 00:04:36.365 00:04:36.365 real 0m6.196s 00:04:36.365 user 0m5.838s 00:04:36.365 sys 0m0.258s 00:04:36.365 17:45:54 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.365 17:45:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.365 ************************************ 00:04:36.365 END TEST skip_rpc 00:04:36.365 
************************************ 00:04:36.365 17:45:54 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:36.365 17:45:54 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.365 17:45:54 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.365 17:45:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.365 ************************************ 00:04:36.365 START TEST skip_rpc_with_json 00:04:36.365 ************************************ 00:04:36.365 17:45:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:36.365 17:45:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:36.365 17:45:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57377 00:04:36.365 17:45:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:36.365 17:45:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.365 17:45:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57377 00:04:36.365 17:45:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57377 ']' 00:04:36.365 17:45:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.365 17:45:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:36.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.365 17:45:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.365 17:45:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:36.365 17:45:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:36.365 [2024-10-25 17:45:54.779576] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
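Unlike the previous test, skip_rpc_with_json starts the target with its RPC server enabled and blocks in waitforlisten until /var/tmp/spdk.sock answers. A hedged sketch of what that helper does conceptually; the name, retry budget, and probe RPC here are illustrative, not the verbatim autotest_common.sh code:

    # waitforlisten_sketch: poll until the target's RPC socket responds or
    # the process dies; illustrative stand-in for SPDK's waitforlisten.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1            # died during startup
            rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                              # never came up
    }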
00:04:36.365 [2024-10-25 17:45:54.779672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57377 ] 00:04:36.621 [2024-10-25 17:45:54.929451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.621 [2024-10-25 17:45:55.006257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.186 17:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:37.186 17:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:37.186 17:45:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:37.186 17:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.186 17:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:37.444 [2024-10-25 17:45:55.624001] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:37.445 request: 00:04:37.445 { 00:04:37.445 "trtype": "tcp", 00:04:37.445 "method": "nvmf_get_transports", 00:04:37.445 "req_id": 1 00:04:37.445 } 00:04:37.445 Got JSON-RPC error response 00:04:37.445 response: 00:04:37.445 { 00:04:37.445 "code": -19, 00:04:37.445 "message": "No such device" 00:04:37.445 } 00:04:37.445 17:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:37.445 17:45:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:37.445 17:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.445 17:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:37.445 [2024-10-25 17:45:55.636106] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:37.445 17:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.445 17:45:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:37.445 17:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.445 17:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:37.445 17:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.445 17:45:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:37.445 { 00:04:37.445 "subsystems": [ 00:04:37.445 { 00:04:37.445 "subsystem": "fsdev", 00:04:37.445 "config": [ 00:04:37.445 { 00:04:37.445 "method": "fsdev_set_opts", 00:04:37.445 "params": { 00:04:37.445 "fsdev_io_pool_size": 65535, 00:04:37.445 "fsdev_io_cache_size": 256 00:04:37.445 } 00:04:37.445 } 00:04:37.445 ] 00:04:37.445 }, 00:04:37.445 { 00:04:37.445 "subsystem": "keyring", 00:04:37.445 "config": [] 00:04:37.445 }, 00:04:37.445 { 00:04:37.445 "subsystem": "iobuf", 00:04:37.445 "config": [ 00:04:37.445 { 00:04:37.445 "method": "iobuf_set_options", 00:04:37.445 "params": { 00:04:37.445 "small_pool_count": 8192, 00:04:37.445 "large_pool_count": 1024, 00:04:37.445 "small_bufsize": 8192, 00:04:37.445 "large_bufsize": 135168, 00:04:37.445 "enable_numa": false 00:04:37.445 } 00:04:37.445 } 00:04:37.445 ] 00:04:37.445 }, 00:04:37.445 { 00:04:37.445 "subsystem": "sock", 00:04:37.445 "config": [ 00:04:37.445 { 
00:04:37.445 "method": "sock_set_default_impl", 00:04:37.445 "params": { 00:04:37.445 "impl_name": "posix" 00:04:37.445 } 00:04:37.445 }, 00:04:37.445 { 00:04:37.445 "method": "sock_impl_set_options", 00:04:37.445 "params": { 00:04:37.445 "impl_name": "ssl", 00:04:37.445 "recv_buf_size": 4096, 00:04:37.445 "send_buf_size": 4096, 00:04:37.445 "enable_recv_pipe": true, 00:04:37.445 "enable_quickack": false, 00:04:37.445 "enable_placement_id": 0, 00:04:37.445 "enable_zerocopy_send_server": true, 00:04:37.445 "enable_zerocopy_send_client": false, 00:04:37.445 "zerocopy_threshold": 0, 00:04:37.445 "tls_version": 0, 00:04:37.445 "enable_ktls": false 00:04:37.445 } 00:04:37.445 }, 00:04:37.445 { 00:04:37.445 "method": "sock_impl_set_options", 00:04:37.445 "params": { 00:04:37.445 "impl_name": "posix", 00:04:37.445 "recv_buf_size": 2097152, 00:04:37.445 "send_buf_size": 2097152, 00:04:37.445 "enable_recv_pipe": true, 00:04:37.445 "enable_quickack": false, 00:04:37.445 "enable_placement_id": 0, 00:04:37.445 "enable_zerocopy_send_server": true, 00:04:37.445 "enable_zerocopy_send_client": false, 00:04:37.445 "zerocopy_threshold": 0, 00:04:37.445 "tls_version": 0, 00:04:37.445 "enable_ktls": false 00:04:37.445 } 00:04:37.445 } 00:04:37.445 ] 00:04:37.445 }, 00:04:37.445 { 00:04:37.445 "subsystem": "vmd", 00:04:37.445 "config": [] 00:04:37.445 }, 00:04:37.445 { 00:04:37.445 "subsystem": "accel", 00:04:37.445 "config": [ 00:04:37.445 { 00:04:37.445 "method": "accel_set_options", 00:04:37.445 "params": { 00:04:37.445 "small_cache_size": 128, 00:04:37.445 "large_cache_size": 16, 00:04:37.445 "task_count": 2048, 00:04:37.445 "sequence_count": 2048, 00:04:37.445 "buf_count": 2048 00:04:37.445 } 00:04:37.445 } 00:04:37.445 ] 00:04:37.445 }, 00:04:37.445 { 00:04:37.445 "subsystem": "bdev", 00:04:37.445 "config": [ 00:04:37.445 { 00:04:37.445 "method": "bdev_set_options", 00:04:37.445 "params": { 00:04:37.445 "bdev_io_pool_size": 65535, 00:04:37.445 "bdev_io_cache_size": 256, 00:04:37.445 "bdev_auto_examine": true, 00:04:37.445 "iobuf_small_cache_size": 128, 00:04:37.445 "iobuf_large_cache_size": 16 00:04:37.445 } 00:04:37.445 }, 00:04:37.445 { 00:04:37.445 "method": "bdev_raid_set_options", 00:04:37.445 "params": { 00:04:37.445 "process_window_size_kb": 1024, 00:04:37.445 "process_max_bandwidth_mb_sec": 0 00:04:37.445 } 00:04:37.445 }, 00:04:37.445 { 00:04:37.445 "method": "bdev_iscsi_set_options", 00:04:37.445 "params": { 00:04:37.445 "timeout_sec": 30 00:04:37.445 } 00:04:37.445 }, 00:04:37.445 { 00:04:37.445 "method": "bdev_nvme_set_options", 00:04:37.445 "params": { 00:04:37.445 "action_on_timeout": "none", 00:04:37.445 "timeout_us": 0, 00:04:37.445 "timeout_admin_us": 0, 00:04:37.445 "keep_alive_timeout_ms": 10000, 00:04:37.445 "arbitration_burst": 0, 00:04:37.445 "low_priority_weight": 0, 00:04:37.445 "medium_priority_weight": 0, 00:04:37.445 "high_priority_weight": 0, 00:04:37.445 "nvme_adminq_poll_period_us": 10000, 00:04:37.445 "nvme_ioq_poll_period_us": 0, 00:04:37.445 "io_queue_requests": 0, 00:04:37.445 "delay_cmd_submit": true, 00:04:37.445 "transport_retry_count": 4, 00:04:37.445 "bdev_retry_count": 3, 00:04:37.445 "transport_ack_timeout": 0, 00:04:37.445 "ctrlr_loss_timeout_sec": 0, 00:04:37.445 "reconnect_delay_sec": 0, 00:04:37.445 "fast_io_fail_timeout_sec": 0, 00:04:37.445 "disable_auto_failback": false, 00:04:37.445 "generate_uuids": false, 00:04:37.445 "transport_tos": 0, 00:04:37.445 "nvme_error_stat": false, 00:04:37.445 "rdma_srq_size": 0, 00:04:37.445 "io_path_stat": false, 
00:04:37.445 "allow_accel_sequence": false, 00:04:37.445 "rdma_max_cq_size": 0, 00:04:37.445 "rdma_cm_event_timeout_ms": 0, 00:04:37.445 "dhchap_digests": [ 00:04:37.445 "sha256", 00:04:37.445 "sha384", 00:04:37.445 "sha512" 00:04:37.445 ], 00:04:37.445 "dhchap_dhgroups": [ 00:04:37.445 "null", 00:04:37.445 "ffdhe2048", 00:04:37.445 "ffdhe3072", 00:04:37.445 "ffdhe4096", 00:04:37.445 "ffdhe6144", 00:04:37.445 "ffdhe8192" 00:04:37.445 ] 00:04:37.445 } 00:04:37.445 }, 00:04:37.445 { 00:04:37.445 "method": "bdev_nvme_set_hotplug", 00:04:37.445 "params": { 00:04:37.445 "period_us": 100000, 00:04:37.445 "enable": false 00:04:37.445 } 00:04:37.445 }, 00:04:37.445 { 00:04:37.445 "method": "bdev_wait_for_examine" 00:04:37.445 } 00:04:37.445 ] 00:04:37.445 }, 00:04:37.445 { 00:04:37.445 "subsystem": "scsi", 00:04:37.445 "config": null 00:04:37.445 }, 00:04:37.445 { 00:04:37.445 "subsystem": "scheduler", 00:04:37.445 "config": [ 00:04:37.445 { 00:04:37.445 "method": "framework_set_scheduler", 00:04:37.445 "params": { 00:04:37.445 "name": "static" 00:04:37.445 } 00:04:37.445 } 00:04:37.445 ] 00:04:37.445 }, 00:04:37.445 { 00:04:37.445 "subsystem": "vhost_scsi", 00:04:37.445 "config": [] 00:04:37.445 }, 00:04:37.445 { 00:04:37.445 "subsystem": "vhost_blk", 00:04:37.445 "config": [] 00:04:37.445 }, 00:04:37.445 { 00:04:37.445 "subsystem": "ublk", 00:04:37.445 "config": [] 00:04:37.445 }, 00:04:37.445 { 00:04:37.445 "subsystem": "nbd", 00:04:37.445 "config": [] 00:04:37.445 }, 00:04:37.445 { 00:04:37.445 "subsystem": "nvmf", 00:04:37.445 "config": [ 00:04:37.445 { 00:04:37.445 "method": "nvmf_set_config", 00:04:37.445 "params": { 00:04:37.445 "discovery_filter": "match_any", 00:04:37.445 "admin_cmd_passthru": { 00:04:37.445 "identify_ctrlr": false 00:04:37.445 }, 00:04:37.445 "dhchap_digests": [ 00:04:37.445 "sha256", 00:04:37.445 "sha384", 00:04:37.445 "sha512" 00:04:37.445 ], 00:04:37.445 "dhchap_dhgroups": [ 00:04:37.445 "null", 00:04:37.445 "ffdhe2048", 00:04:37.445 "ffdhe3072", 00:04:37.445 "ffdhe4096", 00:04:37.445 "ffdhe6144", 00:04:37.445 "ffdhe8192" 00:04:37.445 ] 00:04:37.445 } 00:04:37.445 }, 00:04:37.445 { 00:04:37.445 "method": "nvmf_set_max_subsystems", 00:04:37.445 "params": { 00:04:37.445 "max_subsystems": 1024 00:04:37.445 } 00:04:37.445 }, 00:04:37.445 { 00:04:37.445 "method": "nvmf_set_crdt", 00:04:37.445 "params": { 00:04:37.445 "crdt1": 0, 00:04:37.445 "crdt2": 0, 00:04:37.445 "crdt3": 0 00:04:37.445 } 00:04:37.445 }, 00:04:37.445 { 00:04:37.445 "method": "nvmf_create_transport", 00:04:37.445 "params": { 00:04:37.445 "trtype": "TCP", 00:04:37.445 "max_queue_depth": 128, 00:04:37.445 "max_io_qpairs_per_ctrlr": 127, 00:04:37.445 "in_capsule_data_size": 4096, 00:04:37.445 "max_io_size": 131072, 00:04:37.445 "io_unit_size": 131072, 00:04:37.445 "max_aq_depth": 128, 00:04:37.445 "num_shared_buffers": 511, 00:04:37.445 "buf_cache_size": 4294967295, 00:04:37.445 "dif_insert_or_strip": false, 00:04:37.445 "zcopy": false, 00:04:37.445 "c2h_success": true, 00:04:37.445 "sock_priority": 0, 00:04:37.445 "abort_timeout_sec": 1, 00:04:37.445 "ack_timeout": 0, 00:04:37.445 "data_wr_pool_size": 0 00:04:37.445 } 00:04:37.446 } 00:04:37.446 ] 00:04:37.446 }, 00:04:37.446 { 00:04:37.446 "subsystem": "iscsi", 00:04:37.446 "config": [ 00:04:37.446 { 00:04:37.446 "method": "iscsi_set_options", 00:04:37.446 "params": { 00:04:37.446 "node_base": "iqn.2016-06.io.spdk", 00:04:37.446 "max_sessions": 128, 00:04:37.446 "max_connections_per_session": 2, 00:04:37.446 "max_queue_depth": 64, 00:04:37.446 
"default_time2wait": 2, 00:04:37.446 "default_time2retain": 20, 00:04:37.446 "first_burst_length": 8192, 00:04:37.446 "immediate_data": true, 00:04:37.446 "allow_duplicated_isid": false, 00:04:37.446 "error_recovery_level": 0, 00:04:37.446 "nop_timeout": 60, 00:04:37.446 "nop_in_interval": 30, 00:04:37.446 "disable_chap": false, 00:04:37.446 "require_chap": false, 00:04:37.446 "mutual_chap": false, 00:04:37.446 "chap_group": 0, 00:04:37.446 "max_large_datain_per_connection": 64, 00:04:37.446 "max_r2t_per_connection": 4, 00:04:37.446 "pdu_pool_size": 36864, 00:04:37.446 "immediate_data_pool_size": 16384, 00:04:37.446 "data_out_pool_size": 2048 00:04:37.446 } 00:04:37.446 } 00:04:37.446 ] 00:04:37.446 } 00:04:37.446 ] 00:04:37.446 } 00:04:37.446 17:45:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:37.446 17:45:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57377 00:04:37.446 17:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57377 ']' 00:04:37.446 17:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57377 00:04:37.446 17:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:37.446 17:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:37.446 17:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57377 00:04:37.446 17:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:37.446 killing process with pid 57377 00:04:37.446 17:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:37.446 17:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57377' 00:04:37.446 17:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57377 00:04:37.446 17:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57377 00:04:38.819 17:45:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57411 00:04:38.819 17:45:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:38.819 17:45:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:44.161 17:46:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57411 00:04:44.161 17:46:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57411 ']' 00:04:44.161 17:46:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57411 00:04:44.161 17:46:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:44.161 17:46:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:44.161 17:46:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57411 00:04:44.161 17:46:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:44.161 killing process with pid 57411 00:04:44.161 17:46:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:44.161 17:46:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57411' 00:04:44.161 17:46:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- 
# kill 57411 00:04:44.161 17:46:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57411 00:04:45.096 17:46:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:45.096 17:46:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:45.096 00:04:45.096 real 0m8.465s 00:04:45.096 user 0m8.135s 00:04:45.096 sys 0m0.554s 00:04:45.096 17:46:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.096 17:46:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.096 ************************************ 00:04:45.096 END TEST skip_rpc_with_json 00:04:45.096 ************************************ 00:04:45.096 17:46:03 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:45.096 17:46:03 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.096 17:46:03 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.096 17:46:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.096 ************************************ 00:04:45.096 START TEST skip_rpc_with_delay 00:04:45.096 ************************************ 00:04:45.096 17:46:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:45.096 17:46:03 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:45.096 17:46:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:45.096 17:46:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:45.096 17:46:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.096 17:46:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:45.096 17:46:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.096 17:46:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:45.096 17:46:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.096 17:46:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:45.096 17:46:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.096 17:46:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:45.096 17:46:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:45.096 [2024-10-25 17:46:03.295866] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
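Two assertions finish in the trace above: skip_rpc_with_json's save/replay round trip (configure a TCP transport over RPC, save_config to config.json, restart from that file with no RPC server, then grep the new instance's log for the transport-init notice), and skip_rpc_with_delay's check that spdk_tgt rejects --wait-for-rpc when no RPC server will be started. A condensed sketch of both, reusing the NOT helper sketched earlier, with rpc.py again standing in for rpc_cmd:

    # Condensed sketch of the two checks traced above; file names follow the trace.
    rpc.py nvmf_create_transport -t tcp              # configure the live target
    rpc.py save_config > config.json                 # snapshot the runtime config
    kill "$spdk_pid"; wait "$spdk_pid"
    spdk_tgt --no-rpc-server -m 0x1 --json config.json &> log.txt &
    spdk_pid=$!
    sleep 5
    kill "$spdk_pid"; wait "$spdk_pid"
    grep -q 'TCP Transport Init' log.txt             # transport was rebuilt from JSON
    NOT spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc   # contradictory flags must fail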
00:04:45.096 17:46:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:45.096 17:46:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:45.096 17:46:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:45.096 17:46:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:45.096 00:04:45.096 real 0m0.125s 00:04:45.096 user 0m0.060s 00:04:45.096 sys 0m0.063s 00:04:45.096 17:46:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.096 17:46:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:45.096 ************************************ 00:04:45.096 END TEST skip_rpc_with_delay 00:04:45.096 ************************************ 00:04:45.096 17:46:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:45.096 17:46:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:45.096 17:46:03 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:45.096 17:46:03 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.096 17:46:03 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.096 17:46:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.096 ************************************ 00:04:45.096 START TEST exit_on_failed_rpc_init 00:04:45.096 ************************************ 00:04:45.096 17:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:45.096 17:46:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57534 00:04:45.096 17:46:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57534 00:04:45.096 17:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57534 ']' 00:04:45.096 17:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.096 17:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:45.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.096 17:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.096 17:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:45.096 17:46:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:45.096 17:46:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:45.096 [2024-10-25 17:46:03.462013] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:04:45.096 [2024-10-25 17:46:03.462122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57534 ] 00:04:45.354 [2024-10-25 17:46:03.619659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.354 [2024-10-25 17:46:03.716359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.921 17:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:45.921 17:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:45.921 17:46:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:45.921 17:46:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:45.921 17:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:45.921 17:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:45.921 17:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.921 17:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:45.921 17:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.921 17:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:45.921 17:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.921 17:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:45.921 17:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.921 17:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:45.921 17:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:46.181 [2024-10-25 17:46:04.383924] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:04:46.181 [2024-10-25 17:46:04.384069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57552 ] 00:04:46.181 [2024-10-25 17:46:04.545849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.442 [2024-10-25 17:46:04.654722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.442 [2024-10-25 17:46:04.654815] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
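The "socket in use" error above is the point of exit_on_failed_rpc_init: with one target already listening on the default /var/tmp/spdk.sock, a second instance must fail its RPC listen and exit non-zero (the trace then shows the 234 -> 106 -> 1 exit-status normalization). A sketch of the scenario, reusing the illustrative helpers from earlier:

    # Sketch of exit_on_failed_rpc_init: the second target cannot bind the
    # RPC socket the first one owns, so NOT turns its failure into a pass.
    spdk_tgt -m 0x1 &                       # first instance owns /var/tmp/spdk.sock
    spdk_pid=$!
    waitforlisten_sketch "$spdk_pid"
    NOT spdk_tgt -m 0x2                     # listen fails -> spdk_app_stop non-zero
    kill "$spdk_pid"; wait "$spdk_pid"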
00:04:46.442 [2024-10-25 17:46:04.654829] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:46.442 [2024-10-25 17:46:04.654844] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:46.442 17:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:46.442 17:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:46.442 17:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:46.442 17:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:46.442 17:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:46.442 17:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:46.442 17:46:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:46.442 17:46:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57534 00:04:46.442 17:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57534 ']' 00:04:46.442 17:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57534 00:04:46.442 17:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:46.442 17:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:46.442 17:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57534 00:04:46.702 17:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:46.702 17:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:46.702 17:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57534' 00:04:46.702 killing process with pid 57534 00:04:46.702 17:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57534 00:04:46.702 17:46:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57534 00:04:48.086 00:04:48.086 real 0m2.917s 00:04:48.086 user 0m3.252s 00:04:48.086 sys 0m0.410s 00:04:48.086 17:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:48.086 17:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:48.086 ************************************ 00:04:48.086 END TEST exit_on_failed_rpc_init 00:04:48.086 ************************************ 00:04:48.086 17:46:06 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:48.086 00:04:48.086 real 0m18.017s 00:04:48.086 user 0m17.427s 00:04:48.086 sys 0m1.454s 00:04:48.086 17:46:06 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:48.086 17:46:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.086 ************************************ 00:04:48.086 END TEST skip_rpc 00:04:48.086 ************************************ 00:04:48.086 17:46:06 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:48.086 17:46:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:48.086 17:46:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:48.086 17:46:06 -- common/autotest_common.sh@10 -- # set +x 00:04:48.086 
************************************ 00:04:48.086 START TEST rpc_client 00:04:48.086 ************************************ 00:04:48.086 17:46:06 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:48.086 * Looking for test storage... 00:04:48.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:48.086 17:46:06 rpc_client -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:04:48.086 17:46:06 rpc_client -- common/autotest_common.sh@1689 -- # lcov --version 00:04:48.086 17:46:06 rpc_client -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:04:48.086 17:46:06 rpc_client -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:04:48.086 17:46:06 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.086 17:46:06 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.086 17:46:06 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.086 17:46:06 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.086 17:46:06 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.086 17:46:06 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.086 17:46:06 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.086 17:46:06 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.086 17:46:06 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.086 17:46:06 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.086 17:46:06 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.086 17:46:06 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:48.086 17:46:06 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:48.086 17:46:06 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.086 17:46:06 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:48.086 17:46:06 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:48.086 17:46:06 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:48.086 17:46:06 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.086 17:46:06 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:48.086 17:46:06 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.086 17:46:06 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:48.086 17:46:06 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:48.086 17:46:06 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.086 17:46:06 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:48.086 17:46:06 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.086 17:46:06 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.086 17:46:06 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.086 17:46:06 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:48.086 17:46:06 rpc_client -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.086 17:46:06 rpc_client -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:04:48.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.086 --rc genhtml_branch_coverage=1 00:04:48.086 --rc genhtml_function_coverage=1 00:04:48.086 --rc genhtml_legend=1 00:04:48.086 --rc geninfo_all_blocks=1 00:04:48.086 --rc geninfo_unexecuted_blocks=1 00:04:48.086 00:04:48.086 ' 00:04:48.086 17:46:06 rpc_client -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:04:48.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.086 --rc genhtml_branch_coverage=1 00:04:48.086 --rc genhtml_function_coverage=1 00:04:48.086 --rc genhtml_legend=1 00:04:48.086 --rc geninfo_all_blocks=1 00:04:48.086 --rc geninfo_unexecuted_blocks=1 00:04:48.086 00:04:48.086 ' 00:04:48.086 17:46:06 rpc_client -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:04:48.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.086 --rc genhtml_branch_coverage=1 00:04:48.086 --rc genhtml_function_coverage=1 00:04:48.086 --rc genhtml_legend=1 00:04:48.086 --rc geninfo_all_blocks=1 00:04:48.086 --rc geninfo_unexecuted_blocks=1 00:04:48.086 00:04:48.086 ' 00:04:48.086 17:46:06 rpc_client -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:04:48.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.086 --rc genhtml_branch_coverage=1 00:04:48.086 --rc genhtml_function_coverage=1 00:04:48.086 --rc genhtml_legend=1 00:04:48.086 --rc geninfo_all_blocks=1 00:04:48.086 --rc geninfo_unexecuted_blocks=1 00:04:48.086 00:04:48.086 ' 00:04:48.086 17:46:06 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:48.345 OK 00:04:48.345 17:46:06 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:48.345 00:04:48.345 real 0m0.182s 00:04:48.345 user 0m0.093s 00:04:48.345 sys 0m0.096s 00:04:48.345 17:46:06 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:48.345 17:46:06 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:48.345 ************************************ 00:04:48.345 END TEST rpc_client 00:04:48.345 ************************************ 00:04:48.345 17:46:06 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:48.345 17:46:06 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:48.345 17:46:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:48.345 17:46:06 -- common/autotest_common.sh@10 -- # set +x 00:04:48.345 ************************************ 00:04:48.345 START TEST json_config 00:04:48.345 ************************************ 00:04:48.345 17:46:06 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:48.345 17:46:06 json_config -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:04:48.345 17:46:06 json_config -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:04:48.345 17:46:06 json_config -- common/autotest_common.sh@1689 -- # lcov --version 00:04:48.345 17:46:06 json_config -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:04:48.345 17:46:06 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.345 17:46:06 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.345 17:46:06 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.345 17:46:06 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.345 17:46:06 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.345 17:46:06 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.345 17:46:06 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.345 17:46:06 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.345 17:46:06 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.345 17:46:06 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.345 17:46:06 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.345 17:46:06 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:48.345 17:46:06 json_config -- scripts/common.sh@345 -- # : 1 00:04:48.345 17:46:06 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.345 17:46:06 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:48.345 17:46:06 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:48.345 17:46:06 json_config -- scripts/common.sh@353 -- # local d=1 00:04:48.345 17:46:06 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.345 17:46:06 json_config -- scripts/common.sh@355 -- # echo 1 00:04:48.345 17:46:06 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.345 17:46:06 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:48.345 17:46:06 json_config -- scripts/common.sh@353 -- # local d=2 00:04:48.345 17:46:06 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.345 17:46:06 json_config -- scripts/common.sh@355 -- # echo 2 00:04:48.345 17:46:06 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.345 17:46:06 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.345 17:46:06 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.345 17:46:06 json_config -- scripts/common.sh@368 -- # return 0 00:04:48.345 17:46:06 json_config -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.345 17:46:06 json_config -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:04:48.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.345 --rc genhtml_branch_coverage=1 00:04:48.345 --rc genhtml_function_coverage=1 00:04:48.345 --rc genhtml_legend=1 00:04:48.345 --rc geninfo_all_blocks=1 00:04:48.345 --rc geninfo_unexecuted_blocks=1 00:04:48.345 00:04:48.345 ' 00:04:48.345 17:46:06 json_config -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:04:48.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.345 --rc genhtml_branch_coverage=1 00:04:48.345 --rc genhtml_function_coverage=1 00:04:48.345 --rc genhtml_legend=1 00:04:48.345 --rc geninfo_all_blocks=1 00:04:48.345 --rc geninfo_unexecuted_blocks=1 00:04:48.345 00:04:48.345 ' 00:04:48.345 17:46:06 json_config -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:04:48.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.345 --rc genhtml_branch_coverage=1 00:04:48.345 --rc genhtml_function_coverage=1 00:04:48.345 --rc genhtml_legend=1 00:04:48.345 --rc geninfo_all_blocks=1 00:04:48.345 --rc geninfo_unexecuted_blocks=1 00:04:48.345 00:04:48.345 ' 00:04:48.345 17:46:06 json_config -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:04:48.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.345 --rc genhtml_branch_coverage=1 00:04:48.346 --rc genhtml_function_coverage=1 00:04:48.346 --rc genhtml_legend=1 00:04:48.346 --rc geninfo_all_blocks=1 00:04:48.346 --rc geninfo_unexecuted_blocks=1 00:04:48.346 00:04:48.346 ' 00:04:48.346 17:46:06 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:48.346 17:46:06 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:48.346 17:46:06 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:48.346 17:46:06 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:48.346 17:46:06 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:48.346 17:46:06 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:48.346 17:46:06 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:48.346 17:46:06 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:48.346 17:46:06 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:48.346 17:46:06 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:48.346 17:46:06 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:48.346 17:46:06 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:48.346 17:46:06 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5947e2df-d125-4472-98fc-d86088b051d0 00:04:48.346 17:46:06 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5947e2df-d125-4472-98fc-d86088b051d0 00:04:48.346 17:46:06 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:48.346 17:46:06 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:48.346 17:46:06 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:48.346 17:46:06 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:48.346 17:46:06 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:48.346 17:46:06 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:48.346 17:46:06 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:48.346 17:46:06 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:48.346 17:46:06 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:48.346 17:46:06 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.346 17:46:06 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.346 17:46:06 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.346 17:46:06 json_config -- paths/export.sh@5 -- # export PATH 00:04:48.346 17:46:06 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.346 17:46:06 json_config -- nvmf/common.sh@51 -- # : 0 00:04:48.346 17:46:06 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:48.346 17:46:06 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:48.346 17:46:06 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:48.346 17:46:06 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:48.346 17:46:06 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:48.346 17:46:06 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:48.346 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:48.346 17:46:06 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:48.346 17:46:06 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:48.346 17:46:06 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:48.346 17:46:06 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:48.346 17:46:06 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:48.346 17:46:06 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:48.346 17:46:06 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:48.346 17:46:06 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:48.346 WARNING: No tests are enabled so not running JSON configuration tests 00:04:48.346 17:46:06 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:48.346 17:46:06 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:48.346 00:04:48.346 real 0m0.135s 00:04:48.346 user 0m0.084s 00:04:48.346 sys 0m0.054s 00:04:48.346 17:46:06 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:48.346 17:46:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.346 ************************************ 00:04:48.346 END TEST json_config 00:04:48.346 ************************************ 00:04:48.346 17:46:06 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:48.346 17:46:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:48.346 17:46:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:48.346 17:46:06 -- common/autotest_common.sh@10 -- # set +x 00:04:48.346 ************************************ 00:04:48.346 START TEST json_config_extra_key 00:04:48.346 ************************************ 00:04:48.346 17:46:06 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:48.606 17:46:06 json_config_extra_key -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:04:48.606 17:46:06 json_config_extra_key -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:04:48.607 17:46:06 json_config_extra_key -- common/autotest_common.sh@1689 -- # lcov --version 00:04:48.607 17:46:06 json_config_extra_key -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:04:48.607 17:46:06 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.607 17:46:06 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.607 17:46:06 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.607 17:46:06 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.607 17:46:06 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.607 17:46:06 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.607 17:46:06 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.607 17:46:06 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.607 17:46:06 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.607 17:46:06 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.607 17:46:06 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.607 17:46:06 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:48.607 17:46:06 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:48.607 17:46:06 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.607 17:46:06 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:48.607 17:46:06 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:48.607 17:46:06 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:48.607 17:46:06 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.607 17:46:06 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:48.607 17:46:06 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.607 17:46:06 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:48.607 17:46:06 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:48.607 17:46:06 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.607 17:46:06 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:48.607 17:46:06 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.607 17:46:06 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.607 17:46:06 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.607 17:46:06 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:48.607 17:46:06 json_config_extra_key -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.607 17:46:06 json_config_extra_key -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:04:48.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.607 --rc genhtml_branch_coverage=1 00:04:48.607 --rc genhtml_function_coverage=1 00:04:48.607 --rc genhtml_legend=1 00:04:48.607 --rc geninfo_all_blocks=1 00:04:48.607 --rc geninfo_unexecuted_blocks=1 00:04:48.607 00:04:48.607 ' 00:04:48.607 17:46:06 json_config_extra_key -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:04:48.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.607 --rc genhtml_branch_coverage=1 00:04:48.607 --rc genhtml_function_coverage=1 00:04:48.607 --rc genhtml_legend=1 00:04:48.607 --rc geninfo_all_blocks=1 00:04:48.607 --rc geninfo_unexecuted_blocks=1 00:04:48.607 00:04:48.607 ' 00:04:48.607 17:46:06 json_config_extra_key -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:04:48.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.607 --rc genhtml_branch_coverage=1 00:04:48.607 --rc genhtml_function_coverage=1 00:04:48.607 --rc genhtml_legend=1 00:04:48.607 --rc geninfo_all_blocks=1 00:04:48.607 --rc geninfo_unexecuted_blocks=1 00:04:48.607 00:04:48.607 ' 00:04:48.607 17:46:06 json_config_extra_key -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:04:48.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.607 --rc genhtml_branch_coverage=1 00:04:48.607 --rc 
genhtml_function_coverage=1 00:04:48.607 --rc genhtml_legend=1 00:04:48.607 --rc geninfo_all_blocks=1 00:04:48.607 --rc geninfo_unexecuted_blocks=1 00:04:48.607 00:04:48.607 ' 00:04:48.607 17:46:06 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:48.607 17:46:06 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:48.607 17:46:06 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:48.607 17:46:06 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:48.607 17:46:06 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:48.607 17:46:06 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:48.607 17:46:06 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:48.607 17:46:06 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:48.607 17:46:06 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:48.607 17:46:06 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:48.607 17:46:06 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:48.607 17:46:06 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:48.607 17:46:06 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5947e2df-d125-4472-98fc-d86088b051d0 00:04:48.607 17:46:06 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5947e2df-d125-4472-98fc-d86088b051d0 00:04:48.607 17:46:06 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:48.607 17:46:06 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:48.607 17:46:06 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:48.607 17:46:06 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:48.607 17:46:06 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:48.607 17:46:06 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:48.607 17:46:06 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:48.607 17:46:06 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:48.607 17:46:06 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:48.607 17:46:06 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.607 17:46:06 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.607 17:46:06 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.607 17:46:06 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:48.607 17:46:06 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.607 17:46:06 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:48.607 17:46:06 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:48.607 17:46:06 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:48.607 17:46:06 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:48.607 17:46:06 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:48.607 17:46:06 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:48.607 17:46:06 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:48.607 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:48.607 17:46:06 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:48.607 17:46:06 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:48.607 17:46:06 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:48.608 17:46:06 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:48.608 17:46:06 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:48.608 17:46:06 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:48.608 17:46:06 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:48.608 17:46:06 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:48.608 17:46:06 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:48.608 17:46:06 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:48.608 17:46:06 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:48.608 INFO: launching applications... 00:04:48.608 17:46:06 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:48.608 17:46:06 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:48.608 17:46:06 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
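(The "[: : integer expression expected" complaint above is bash's test builtin being handed an empty string where -eq requires an integer: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' because the variable it checks is unset, the test simply returns false, and the run continues. A minimal reproduction and two guarded spellings; SOME_FLAG is a hypothetical stand-in, since the trace does not show which variable common.sh actually expands:)

    SOME_FLAG=""                          # hypothetical name; unset/empty in this run
    [ "$SOME_FLAG" -eq 1 ]                # -> [: : integer expression expected, exit status 2
    [ "${SOME_FLAG:-0}" -eq 1 ]           # guarded: empty defaults to 0, no error
    [[ $SOME_FLAG == 1 ]]                 # string comparison needs no integer at all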
00:04:48.608 17:46:06 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:48.608 17:46:06 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:48.608 17:46:06 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:48.608 17:46:06 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:48.608 17:46:06 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:48.608 17:46:06 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:48.608 17:46:06 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:48.608 17:46:06 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:48.608 Waiting for target to run... 00:04:48.608 17:46:06 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57745 00:04:48.608 17:46:06 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:48.608 17:46:06 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57745 /var/tmp/spdk_tgt.sock 00:04:48.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:48.608 17:46:06 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 57745 ']' 00:04:48.608 17:46:06 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:48.608 17:46:06 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:48.608 17:46:06 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:48.608 17:46:06 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:48.608 17:46:06 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:48.608 17:46:06 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:48.608 [2024-10-25 17:46:06.970896] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:04:48.608 [2024-10-25 17:46:06.971022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57745 ] 00:04:48.870 [2024-10-25 17:46:07.296216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.130 [2024-10-25 17:46:07.397578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.696 17:46:07 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:49.696 17:46:07 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:49.696 00:04:49.696 17:46:07 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:49.696 INFO: shutting down applications... 00:04:49.696 17:46:07 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
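(waitforlisten above blocks until the freshly launched spdk_tgt both stays alive and exposes /var/tmp/spdk_tgt.sock, with max_retries=100 as traced. A rough sketch of that polling idea; the real helper in autotest_common.sh also confirms the socket actually answers RPCs, which this simplification skips:)

    wait_for_rpc_socket() {                          # sketch, not the SPDK helper itself
        local pid=$1 sock=$2 retries=100
        while (( retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died while starting
            [ -S "$sock" ] && return 0               # UNIX socket file is in place
            sleep 0.1
        done
        return 1                                     # never came up within the budget
    }
    # wait_for_rpc_socket 57745 /var/tmp/spdk_tgt.sock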
00:04:49.696 17:46:07 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:49.696 17:46:07 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:49.696 17:46:07 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:49.696 17:46:07 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57745 ]] 00:04:49.696 17:46:07 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57745 00:04:49.696 17:46:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:49.696 17:46:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:49.696 17:46:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57745 00:04:49.696 17:46:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:50.261 17:46:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:50.261 17:46:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.261 17:46:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57745 00:04:50.261 17:46:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:50.522 17:46:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:50.522 17:46:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.522 17:46:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57745 00:04:50.522 17:46:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:51.087 17:46:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:51.087 17:46:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.087 17:46:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57745 00:04:51.087 17:46:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:51.657 17:46:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:51.657 17:46:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.657 17:46:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57745 00:04:51.657 17:46:09 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:51.657 17:46:09 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:51.657 17:46:09 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:51.657 SPDK target shutdown done 00:04:51.657 17:46:09 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:51.657 Success 00:04:51.657 17:46:09 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:51.657 00:04:51.657 real 0m3.158s 00:04:51.657 user 0m2.745s 00:04:51.657 sys 0m0.411s 00:04:51.657 17:46:09 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:51.657 ************************************ 00:04:51.657 END TEST json_config_extra_key 00:04:51.657 ************************************ 00:04:51.657 17:46:09 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:51.657 17:46:09 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:51.657 17:46:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:51.657 17:46:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:51.657 17:46:09 -- common/autotest_common.sh@10 -- # set +x 00:04:51.657 
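(The shutdown just traced is a single SIGINT followed by up to 30 half-second probes with kill -0, which sends no signal and only checks whether the PID still exists; each failed probe is one of the "sleep 0.5" iterations above. The same pattern in isolation, mirroring json_config/common.sh's loop:)

    shutdown_app() {
        local pid=$1
        kill -SIGINT "$pid" 2>/dev/null || return 0   # already gone
        for (( i = 0; i < 30; i++ )); do              # ~15 s budget, as in the traced loop
            kill -0 "$pid" 2>/dev/null || return 0    # kill -0 only probes, signals nothing
            sleep 0.5
        done
        return 1                                      # still alive; caller decides what next
    }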
************************************ 00:04:51.657 START TEST alias_rpc 00:04:51.657 ************************************ 00:04:51.657 17:46:09 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:51.657 * Looking for test storage... 00:04:51.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:51.657 17:46:10 alias_rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:04:51.657 17:46:10 alias_rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:04:51.657 17:46:10 alias_rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:04:51.917 17:46:10 alias_rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:04:51.917 17:46:10 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.917 17:46:10 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.917 17:46:10 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.917 17:46:10 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.917 17:46:10 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.917 17:46:10 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.917 17:46:10 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.917 17:46:10 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.917 17:46:10 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.917 17:46:10 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.917 17:46:10 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.917 17:46:10 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:51.917 17:46:10 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:51.917 17:46:10 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.917 17:46:10 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:51.917 17:46:10 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:51.917 17:46:10 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:51.917 17:46:10 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.917 17:46:10 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:51.917 17:46:10 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.917 17:46:10 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:51.917 17:46:10 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:51.917 17:46:10 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.917 17:46:10 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:51.917 17:46:10 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.917 17:46:10 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.917 17:46:10 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.917 17:46:10 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:51.917 17:46:10 alias_rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.917 17:46:10 alias_rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:04:51.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.917 --rc genhtml_branch_coverage=1 00:04:51.917 --rc genhtml_function_coverage=1 00:04:51.917 --rc genhtml_legend=1 00:04:51.917 --rc geninfo_all_blocks=1 00:04:51.917 --rc geninfo_unexecuted_blocks=1 00:04:51.917 00:04:51.917 ' 00:04:51.917 17:46:10 alias_rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:04:51.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.917 --rc genhtml_branch_coverage=1 00:04:51.917 --rc genhtml_function_coverage=1 00:04:51.917 --rc genhtml_legend=1 00:04:51.917 --rc geninfo_all_blocks=1 00:04:51.917 --rc geninfo_unexecuted_blocks=1 00:04:51.917 00:04:51.917 ' 00:04:51.917 17:46:10 alias_rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:04:51.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.917 --rc genhtml_branch_coverage=1 00:04:51.917 --rc genhtml_function_coverage=1 00:04:51.917 --rc genhtml_legend=1 00:04:51.917 --rc geninfo_all_blocks=1 00:04:51.917 --rc geninfo_unexecuted_blocks=1 00:04:51.917 00:04:51.917 ' 00:04:51.917 17:46:10 alias_rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:04:51.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.917 --rc genhtml_branch_coverage=1 00:04:51.917 --rc genhtml_function_coverage=1 00:04:51.917 --rc genhtml_legend=1 00:04:51.917 --rc geninfo_all_blocks=1 00:04:51.917 --rc geninfo_unexecuted_blocks=1 00:04:51.917 00:04:51.917 ' 00:04:51.917 17:46:10 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:51.917 17:46:10 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57838 00:04:51.917 17:46:10 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57838 00:04:51.917 17:46:10 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 57838 ']' 00:04:51.917 17:46:10 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.917 17:46:10 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:51.917 17:46:10 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:51.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
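(The cmp_versions trace repeated before each test is scripts/common.sh deciding whether the installed lcov predates 1.15: both version strings are split on '.', '-' and ':' into arrays and compared field by field, padding the shorter one with zeros. The same idea reduced to the less-than case, assuming purely numeric fields; the real function also handles '>', '=', and mixed-length versions more carefully:)

    version_lt() {                                    # sketch of the split-and-compare idea
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # e.g. 1.15 < 2 at the first field
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                                        # equal is not less-than
    }
    # version_lt 1.15 2 && echo "old lcov"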
00:04:51.917 17:46:10 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.917 17:46:10 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:51.917 17:46:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.917 [2024-10-25 17:46:10.184824] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:04:51.917 [2024-10-25 17:46:10.184965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57838 ] 00:04:51.917 [2024-10-25 17:46:10.347458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.177 [2024-10-25 17:46:10.481366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.747 17:46:11 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:52.747 17:46:11 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:52.747 17:46:11 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:53.007 17:46:11 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57838 00:04:53.007 17:46:11 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 57838 ']' 00:04:53.007 17:46:11 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 57838 00:04:53.007 17:46:11 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:53.007 17:46:11 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:53.007 17:46:11 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57838 00:04:53.007 17:46:11 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:53.007 17:46:11 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:53.007 killing process with pid 57838 00:04:53.007 17:46:11 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57838' 00:04:53.007 17:46:11 alias_rpc -- common/autotest_common.sh@969 -- # kill 57838 00:04:53.007 17:46:11 alias_rpc -- common/autotest_common.sh@974 -- # wait 57838 00:04:54.923 00:04:54.923 real 0m3.169s 00:04:54.923 user 0m3.210s 00:04:54.923 sys 0m0.486s 00:04:54.923 17:46:13 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.923 ************************************ 00:04:54.923 END TEST alias_rpc 00:04:54.923 ************************************ 00:04:54.923 17:46:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.923 17:46:13 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:54.923 17:46:13 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:54.923 17:46:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:54.923 17:46:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.923 17:46:13 -- common/autotest_common.sh@10 -- # set +x 00:04:54.923 ************************************ 00:04:54.923 START TEST spdkcli_tcp 00:04:54.923 ************************************ 00:04:54.923 17:46:13 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:54.923 * Looking for test storage... 
00:04:54.923 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:54.923 17:46:13 spdkcli_tcp -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:04:54.923 17:46:13 spdkcli_tcp -- common/autotest_common.sh@1689 -- # lcov --version 00:04:54.923 17:46:13 spdkcli_tcp -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:04:54.923 17:46:13 spdkcli_tcp -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:04:54.923 17:46:13 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.923 17:46:13 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.923 17:46:13 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.923 17:46:13 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.923 17:46:13 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.923 17:46:13 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.923 17:46:13 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.923 17:46:13 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.923 17:46:13 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.923 17:46:13 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.923 17:46:13 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.924 17:46:13 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:54.924 17:46:13 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:54.924 17:46:13 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.924 17:46:13 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:54.924 17:46:13 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:54.924 17:46:13 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:54.924 17:46:13 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.924 17:46:13 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:54.924 17:46:13 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.924 17:46:13 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:54.924 17:46:13 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:54.924 17:46:13 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.924 17:46:13 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:54.924 17:46:13 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.924 17:46:13 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.924 17:46:13 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.924 17:46:13 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:54.924 17:46:13 spdkcli_tcp -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.924 17:46:13 spdkcli_tcp -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:04:54.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.924 --rc genhtml_branch_coverage=1 00:04:54.924 --rc genhtml_function_coverage=1 00:04:54.924 --rc genhtml_legend=1 00:04:54.924 --rc geninfo_all_blocks=1 00:04:54.924 --rc geninfo_unexecuted_blocks=1 00:04:54.924 00:04:54.924 ' 00:04:54.924 17:46:13 spdkcli_tcp -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:04:54.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.924 --rc genhtml_branch_coverage=1 00:04:54.924 --rc genhtml_function_coverage=1 00:04:54.924 --rc genhtml_legend=1 00:04:54.924 --rc geninfo_all_blocks=1 00:04:54.924 --rc geninfo_unexecuted_blocks=1 00:04:54.924 
00:04:54.924 ' 00:04:54.924 17:46:13 spdkcli_tcp -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:04:54.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.924 --rc genhtml_branch_coverage=1 00:04:54.924 --rc genhtml_function_coverage=1 00:04:54.924 --rc genhtml_legend=1 00:04:54.924 --rc geninfo_all_blocks=1 00:04:54.924 --rc geninfo_unexecuted_blocks=1 00:04:54.924 00:04:54.924 ' 00:04:54.924 17:46:13 spdkcli_tcp -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:04:54.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.924 --rc genhtml_branch_coverage=1 00:04:54.924 --rc genhtml_function_coverage=1 00:04:54.924 --rc genhtml_legend=1 00:04:54.924 --rc geninfo_all_blocks=1 00:04:54.924 --rc geninfo_unexecuted_blocks=1 00:04:54.924 00:04:54.924 ' 00:04:54.924 17:46:13 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:54.924 17:46:13 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:54.924 17:46:13 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:54.924 17:46:13 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:54.924 17:46:13 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:54.924 17:46:13 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:54.924 17:46:13 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:54.924 17:46:13 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:54.924 17:46:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.924 17:46:13 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57940 00:04:54.924 17:46:13 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57940 00:04:54.924 17:46:13 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 57940 ']' 00:04:54.924 17:46:13 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.924 17:46:13 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:54.924 17:46:13 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:54.924 17:46:13 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.924 17:46:13 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:54.924 17:46:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:55.185 [2024-10-25 17:46:13.422255] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
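(spdkcli/tcp.sh installs its err_cleanup trap before starting anything, so a failed assertion, Ctrl-C, or normal exit still tears down whatever was launched. err_cleanup's real body lives in test/spdkcli/common.sh and is not shown in this trace; the body below is purely illustrative of what such a hook typically does with the PIDs this run creates:)

    err_cleanup() {                                       # illustrative body, not the SPDK one
        [[ -n ${socat_pid:-} ]] && kill "$socat_pid" 2>/dev/null
        [[ -n ${spdk_tgt_pid:-} ]] && kill "$spdk_tgt_pid" 2>/dev/null
    }
    trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT        # as installed in the trace above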
00:04:55.185 [2024-10-25 17:46:13.422376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57940 ] 00:04:55.185 [2024-10-25 17:46:13.583344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:55.446 [2024-10-25 17:46:13.691141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.446 [2024-10-25 17:46:13.691200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.018 17:46:14 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:56.018 17:46:14 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:56.018 17:46:14 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:56.018 17:46:14 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57951 00:04:56.018 17:46:14 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:56.279 [ 00:04:56.279 "bdev_malloc_delete", 00:04:56.279 "bdev_malloc_create", 00:04:56.279 "bdev_null_resize", 00:04:56.279 "bdev_null_delete", 00:04:56.279 "bdev_null_create", 00:04:56.279 "bdev_nvme_cuse_unregister", 00:04:56.279 "bdev_nvme_cuse_register", 00:04:56.279 "bdev_opal_new_user", 00:04:56.279 "bdev_opal_set_lock_state", 00:04:56.279 "bdev_opal_delete", 00:04:56.279 "bdev_opal_get_info", 00:04:56.279 "bdev_opal_create", 00:04:56.279 "bdev_nvme_opal_revert", 00:04:56.279 "bdev_nvme_opal_init", 00:04:56.279 "bdev_nvme_send_cmd", 00:04:56.279 "bdev_nvme_set_keys", 00:04:56.279 "bdev_nvme_get_path_iostat", 00:04:56.279 "bdev_nvme_get_mdns_discovery_info", 00:04:56.279 "bdev_nvme_stop_mdns_discovery", 00:04:56.279 "bdev_nvme_start_mdns_discovery", 00:04:56.279 "bdev_nvme_set_multipath_policy", 00:04:56.279 "bdev_nvme_set_preferred_path", 00:04:56.279 "bdev_nvme_get_io_paths", 00:04:56.279 "bdev_nvme_remove_error_injection", 00:04:56.279 "bdev_nvme_add_error_injection", 00:04:56.279 "bdev_nvme_get_discovery_info", 00:04:56.279 "bdev_nvme_stop_discovery", 00:04:56.279 "bdev_nvme_start_discovery", 00:04:56.279 "bdev_nvme_get_controller_health_info", 00:04:56.279 "bdev_nvme_disable_controller", 00:04:56.279 "bdev_nvme_enable_controller", 00:04:56.279 "bdev_nvme_reset_controller", 00:04:56.279 "bdev_nvme_get_transport_statistics", 00:04:56.279 "bdev_nvme_apply_firmware", 00:04:56.279 "bdev_nvme_detach_controller", 00:04:56.279 "bdev_nvme_get_controllers", 00:04:56.279 "bdev_nvme_attach_controller", 00:04:56.279 "bdev_nvme_set_hotplug", 00:04:56.279 "bdev_nvme_set_options", 00:04:56.279 "bdev_passthru_delete", 00:04:56.279 "bdev_passthru_create", 00:04:56.279 "bdev_lvol_set_parent_bdev", 00:04:56.279 "bdev_lvol_set_parent", 00:04:56.279 "bdev_lvol_check_shallow_copy", 00:04:56.279 "bdev_lvol_start_shallow_copy", 00:04:56.279 "bdev_lvol_grow_lvstore", 00:04:56.279 "bdev_lvol_get_lvols", 00:04:56.279 "bdev_lvol_get_lvstores", 00:04:56.279 "bdev_lvol_delete", 00:04:56.279 "bdev_lvol_set_read_only", 00:04:56.279 "bdev_lvol_resize", 00:04:56.279 "bdev_lvol_decouple_parent", 00:04:56.279 "bdev_lvol_inflate", 00:04:56.279 "bdev_lvol_rename", 00:04:56.279 "bdev_lvol_clone_bdev", 00:04:56.279 "bdev_lvol_clone", 00:04:56.279 "bdev_lvol_snapshot", 00:04:56.279 "bdev_lvol_create", 00:04:56.279 "bdev_lvol_delete_lvstore", 00:04:56.279 "bdev_lvol_rename_lvstore", 00:04:56.279 
"bdev_lvol_create_lvstore", 00:04:56.279 "bdev_raid_set_options", 00:04:56.279 "bdev_raid_remove_base_bdev", 00:04:56.279 "bdev_raid_add_base_bdev", 00:04:56.279 "bdev_raid_delete", 00:04:56.279 "bdev_raid_create", 00:04:56.279 "bdev_raid_get_bdevs", 00:04:56.279 "bdev_error_inject_error", 00:04:56.279 "bdev_error_delete", 00:04:56.279 "bdev_error_create", 00:04:56.279 "bdev_split_delete", 00:04:56.279 "bdev_split_create", 00:04:56.279 "bdev_delay_delete", 00:04:56.279 "bdev_delay_create", 00:04:56.279 "bdev_delay_update_latency", 00:04:56.279 "bdev_zone_block_delete", 00:04:56.279 "bdev_zone_block_create", 00:04:56.279 "blobfs_create", 00:04:56.279 "blobfs_detect", 00:04:56.279 "blobfs_set_cache_size", 00:04:56.279 "bdev_xnvme_delete", 00:04:56.279 "bdev_xnvme_create", 00:04:56.279 "bdev_aio_delete", 00:04:56.279 "bdev_aio_rescan", 00:04:56.279 "bdev_aio_create", 00:04:56.279 "bdev_ftl_set_property", 00:04:56.279 "bdev_ftl_get_properties", 00:04:56.279 "bdev_ftl_get_stats", 00:04:56.279 "bdev_ftl_unmap", 00:04:56.279 "bdev_ftl_unload", 00:04:56.279 "bdev_ftl_delete", 00:04:56.279 "bdev_ftl_load", 00:04:56.279 "bdev_ftl_create", 00:04:56.279 "bdev_virtio_attach_controller", 00:04:56.279 "bdev_virtio_scsi_get_devices", 00:04:56.279 "bdev_virtio_detach_controller", 00:04:56.279 "bdev_virtio_blk_set_hotplug", 00:04:56.279 "bdev_iscsi_delete", 00:04:56.279 "bdev_iscsi_create", 00:04:56.279 "bdev_iscsi_set_options", 00:04:56.279 "accel_error_inject_error", 00:04:56.279 "ioat_scan_accel_module", 00:04:56.279 "dsa_scan_accel_module", 00:04:56.279 "iaa_scan_accel_module", 00:04:56.279 "keyring_file_remove_key", 00:04:56.279 "keyring_file_add_key", 00:04:56.279 "keyring_linux_set_options", 00:04:56.279 "fsdev_aio_delete", 00:04:56.279 "fsdev_aio_create", 00:04:56.279 "iscsi_get_histogram", 00:04:56.279 "iscsi_enable_histogram", 00:04:56.279 "iscsi_set_options", 00:04:56.279 "iscsi_get_auth_groups", 00:04:56.279 "iscsi_auth_group_remove_secret", 00:04:56.279 "iscsi_auth_group_add_secret", 00:04:56.279 "iscsi_delete_auth_group", 00:04:56.279 "iscsi_create_auth_group", 00:04:56.279 "iscsi_set_discovery_auth", 00:04:56.279 "iscsi_get_options", 00:04:56.279 "iscsi_target_node_request_logout", 00:04:56.279 "iscsi_target_node_set_redirect", 00:04:56.279 "iscsi_target_node_set_auth", 00:04:56.279 "iscsi_target_node_add_lun", 00:04:56.279 "iscsi_get_stats", 00:04:56.279 "iscsi_get_connections", 00:04:56.279 "iscsi_portal_group_set_auth", 00:04:56.279 "iscsi_start_portal_group", 00:04:56.279 "iscsi_delete_portal_group", 00:04:56.279 "iscsi_create_portal_group", 00:04:56.279 "iscsi_get_portal_groups", 00:04:56.279 "iscsi_delete_target_node", 00:04:56.279 "iscsi_target_node_remove_pg_ig_maps", 00:04:56.279 "iscsi_target_node_add_pg_ig_maps", 00:04:56.279 "iscsi_create_target_node", 00:04:56.279 "iscsi_get_target_nodes", 00:04:56.279 "iscsi_delete_initiator_group", 00:04:56.279 "iscsi_initiator_group_remove_initiators", 00:04:56.279 "iscsi_initiator_group_add_initiators", 00:04:56.279 "iscsi_create_initiator_group", 00:04:56.279 "iscsi_get_initiator_groups", 00:04:56.279 "nvmf_set_crdt", 00:04:56.279 "nvmf_set_config", 00:04:56.279 "nvmf_set_max_subsystems", 00:04:56.279 "nvmf_stop_mdns_prr", 00:04:56.279 "nvmf_publish_mdns_prr", 00:04:56.279 "nvmf_subsystem_get_listeners", 00:04:56.279 "nvmf_subsystem_get_qpairs", 00:04:56.279 "nvmf_subsystem_get_controllers", 00:04:56.279 "nvmf_get_stats", 00:04:56.279 "nvmf_get_transports", 00:04:56.279 "nvmf_create_transport", 00:04:56.279 "nvmf_get_targets", 00:04:56.279 
"nvmf_delete_target", 00:04:56.279 "nvmf_create_target", 00:04:56.279 "nvmf_subsystem_allow_any_host", 00:04:56.279 "nvmf_subsystem_set_keys", 00:04:56.279 "nvmf_subsystem_remove_host", 00:04:56.279 "nvmf_subsystem_add_host", 00:04:56.279 "nvmf_ns_remove_host", 00:04:56.279 "nvmf_ns_add_host", 00:04:56.279 "nvmf_subsystem_remove_ns", 00:04:56.279 "nvmf_subsystem_set_ns_ana_group", 00:04:56.279 "nvmf_subsystem_add_ns", 00:04:56.279 "nvmf_subsystem_listener_set_ana_state", 00:04:56.279 "nvmf_discovery_get_referrals", 00:04:56.279 "nvmf_discovery_remove_referral", 00:04:56.279 "nvmf_discovery_add_referral", 00:04:56.279 "nvmf_subsystem_remove_listener", 00:04:56.279 "nvmf_subsystem_add_listener", 00:04:56.279 "nvmf_delete_subsystem", 00:04:56.279 "nvmf_create_subsystem", 00:04:56.279 "nvmf_get_subsystems", 00:04:56.279 "env_dpdk_get_mem_stats", 00:04:56.279 "nbd_get_disks", 00:04:56.279 "nbd_stop_disk", 00:04:56.279 "nbd_start_disk", 00:04:56.279 "ublk_recover_disk", 00:04:56.279 "ublk_get_disks", 00:04:56.279 "ublk_stop_disk", 00:04:56.279 "ublk_start_disk", 00:04:56.279 "ublk_destroy_target", 00:04:56.280 "ublk_create_target", 00:04:56.280 "virtio_blk_create_transport", 00:04:56.280 "virtio_blk_get_transports", 00:04:56.280 "vhost_controller_set_coalescing", 00:04:56.280 "vhost_get_controllers", 00:04:56.280 "vhost_delete_controller", 00:04:56.280 "vhost_create_blk_controller", 00:04:56.280 "vhost_scsi_controller_remove_target", 00:04:56.280 "vhost_scsi_controller_add_target", 00:04:56.280 "vhost_start_scsi_controller", 00:04:56.280 "vhost_create_scsi_controller", 00:04:56.280 "thread_set_cpumask", 00:04:56.280 "scheduler_set_options", 00:04:56.280 "framework_get_governor", 00:04:56.280 "framework_get_scheduler", 00:04:56.280 "framework_set_scheduler", 00:04:56.280 "framework_get_reactors", 00:04:56.280 "thread_get_io_channels", 00:04:56.280 "thread_get_pollers", 00:04:56.280 "thread_get_stats", 00:04:56.280 "framework_monitor_context_switch", 00:04:56.280 "spdk_kill_instance", 00:04:56.280 "log_enable_timestamps", 00:04:56.280 "log_get_flags", 00:04:56.280 "log_clear_flag", 00:04:56.280 "log_set_flag", 00:04:56.280 "log_get_level", 00:04:56.280 "log_set_level", 00:04:56.280 "log_get_print_level", 00:04:56.280 "log_set_print_level", 00:04:56.280 "framework_enable_cpumask_locks", 00:04:56.280 "framework_disable_cpumask_locks", 00:04:56.280 "framework_wait_init", 00:04:56.280 "framework_start_init", 00:04:56.280 "scsi_get_devices", 00:04:56.280 "bdev_get_histogram", 00:04:56.280 "bdev_enable_histogram", 00:04:56.280 "bdev_set_qos_limit", 00:04:56.280 "bdev_set_qd_sampling_period", 00:04:56.280 "bdev_get_bdevs", 00:04:56.280 "bdev_reset_iostat", 00:04:56.280 "bdev_get_iostat", 00:04:56.280 "bdev_examine", 00:04:56.280 "bdev_wait_for_examine", 00:04:56.280 "bdev_set_options", 00:04:56.280 "accel_get_stats", 00:04:56.280 "accel_set_options", 00:04:56.280 "accel_set_driver", 00:04:56.280 "accel_crypto_key_destroy", 00:04:56.280 "accel_crypto_keys_get", 00:04:56.280 "accel_crypto_key_create", 00:04:56.280 "accel_assign_opc", 00:04:56.280 "accel_get_module_info", 00:04:56.280 "accel_get_opc_assignments", 00:04:56.280 "vmd_rescan", 00:04:56.280 "vmd_remove_device", 00:04:56.280 "vmd_enable", 00:04:56.280 "sock_get_default_impl", 00:04:56.280 "sock_set_default_impl", 00:04:56.280 "sock_impl_set_options", 00:04:56.280 "sock_impl_get_options", 00:04:56.280 "iobuf_get_stats", 00:04:56.280 "iobuf_set_options", 00:04:56.280 "keyring_get_keys", 00:04:56.280 "framework_get_pci_devices", 00:04:56.280 
"framework_get_config", 00:04:56.280 "framework_get_subsystems", 00:04:56.280 "fsdev_set_opts", 00:04:56.280 "fsdev_get_opts", 00:04:56.280 "trace_get_info", 00:04:56.280 "trace_get_tpoint_group_mask", 00:04:56.280 "trace_disable_tpoint_group", 00:04:56.280 "trace_enable_tpoint_group", 00:04:56.280 "trace_clear_tpoint_mask", 00:04:56.280 "trace_set_tpoint_mask", 00:04:56.280 "notify_get_notifications", 00:04:56.280 "notify_get_types", 00:04:56.280 "spdk_get_version", 00:04:56.280 "rpc_get_methods" 00:04:56.280 ] 00:04:56.280 17:46:14 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:56.280 17:46:14 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:56.280 17:46:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.280 17:46:14 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:56.280 17:46:14 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57940 00:04:56.280 17:46:14 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 57940 ']' 00:04:56.280 17:46:14 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 57940 00:04:56.280 17:46:14 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:56.280 17:46:14 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:56.280 17:46:14 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57940 00:04:56.280 17:46:14 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:56.280 17:46:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:56.280 killing process with pid 57940 00:04:56.280 17:46:14 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57940' 00:04:56.280 17:46:14 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 57940 00:04:56.280 17:46:14 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 57940 00:04:57.661 ************************************ 00:04:57.661 END TEST spdkcli_tcp 00:04:57.661 ************************************ 00:04:57.661 00:04:57.661 real 0m2.880s 00:04:57.661 user 0m5.166s 00:04:57.661 sys 0m0.430s 00:04:57.661 17:46:16 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.661 17:46:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:57.920 17:46:16 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:57.920 17:46:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.920 17:46:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.920 17:46:16 -- common/autotest_common.sh@10 -- # set +x 00:04:57.920 ************************************ 00:04:57.920 START TEST dpdk_mem_utility 00:04:57.920 ************************************ 00:04:57.920 17:46:16 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:57.920 * Looking for test storage... 
00:04:57.920 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:57.920 17:46:16 dpdk_mem_utility -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:04:57.920 17:46:16 dpdk_mem_utility -- common/autotest_common.sh@1689 -- # lcov --version 00:04:57.920 17:46:16 dpdk_mem_utility -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:04:57.920 17:46:16 dpdk_mem_utility -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:04:57.920 17:46:16 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.920 17:46:16 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.920 17:46:16 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.920 17:46:16 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.920 17:46:16 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.920 17:46:16 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.920 17:46:16 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.920 17:46:16 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.920 17:46:16 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.920 17:46:16 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.921 17:46:16 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.921 17:46:16 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:57.921 17:46:16 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:57.921 17:46:16 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.921 17:46:16 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:57.921 17:46:16 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:57.921 17:46:16 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:57.921 17:46:16 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.921 17:46:16 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:57.921 17:46:16 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.921 17:46:16 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:57.921 17:46:16 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:57.921 17:46:16 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.921 17:46:16 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:57.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:57.921 17:46:16 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.921 17:46:16 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.921 17:46:16 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.921 17:46:16 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:57.921 17:46:16 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.921 17:46:16 dpdk_mem_utility -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:04:57.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.921 --rc genhtml_branch_coverage=1 00:04:57.921 --rc genhtml_function_coverage=1 00:04:57.921 --rc genhtml_legend=1 00:04:57.921 --rc geninfo_all_blocks=1 00:04:57.921 --rc geninfo_unexecuted_blocks=1 00:04:57.921 00:04:57.921 ' 00:04:57.921 17:46:16 dpdk_mem_utility -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:04:57.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.921 --rc genhtml_branch_coverage=1 00:04:57.921 --rc genhtml_function_coverage=1 00:04:57.921 --rc genhtml_legend=1 00:04:57.921 --rc geninfo_all_blocks=1 00:04:57.921 --rc geninfo_unexecuted_blocks=1 00:04:57.921 00:04:57.921 ' 00:04:57.921 17:46:16 dpdk_mem_utility -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:04:57.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.921 --rc genhtml_branch_coverage=1 00:04:57.921 --rc genhtml_function_coverage=1 00:04:57.921 --rc genhtml_legend=1 00:04:57.921 --rc geninfo_all_blocks=1 00:04:57.921 --rc geninfo_unexecuted_blocks=1 00:04:57.921 00:04:57.921 ' 00:04:57.921 17:46:16 dpdk_mem_utility -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:04:57.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.921 --rc genhtml_branch_coverage=1 00:04:57.921 --rc genhtml_function_coverage=1 00:04:57.921 --rc genhtml_legend=1 00:04:57.921 --rc geninfo_all_blocks=1 00:04:57.921 --rc geninfo_unexecuted_blocks=1 00:04:57.921 00:04:57.921 ' 00:04:57.921 17:46:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:57.921 17:46:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58045 00:04:57.921 17:46:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58045 00:04:57.921 17:46:16 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 58045 ']' 00:04:57.921 17:46:16 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.921 17:46:16 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:57.921 17:46:16 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.921 17:46:16 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:57.921 17:46:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:57.921 17:46:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.921 [2024-10-25 17:46:16.351502] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
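(test_dpdk_mem_info.sh drives the target it just started through env_dpdk_get_mem_stats, which makes the app write its DPDK memory state to /tmp/spdk_mem_dump.txt, then post-processes that file with scripts/dpdk_mem_info.py: first the heap/mempool/memzone summary, then -m 0 for heap 0's per-element listing, both of which appear in the trace that follows. Reduced to the commands involved, noting that rpc_cmd in the trace is the harness wrapper around rpc.py:)

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    #   -> { "filename": "/tmp/spdk_mem_dump.txt" }
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py          # heap/mempool/memzone summary
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0     # element-by-element dump of heap 0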
00:04:57.921 [2024-10-25 17:46:16.351863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58045 ] 00:04:58.180 [2024-10-25 17:46:16.508698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.180 [2024-10-25 17:46:16.604945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.122 17:46:17 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:59.122 17:46:17 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:59.122 17:46:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:59.122 17:46:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:59.122 17:46:17 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.122 17:46:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:59.122 { 00:04:59.122 "filename": "/tmp/spdk_mem_dump.txt" 00:04:59.122 } 00:04:59.122 17:46:17 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.122 17:46:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:59.122 DPDK memory size 816.000000 MiB in 1 heap(s) 00:04:59.122 1 heaps totaling size 816.000000 MiB 00:04:59.122 size: 816.000000 MiB heap id: 0 00:04:59.122 end heaps---------- 00:04:59.122 9 mempools totaling size 595.772034 MiB 00:04:59.122 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:59.122 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:59.122 size: 92.545471 MiB name: bdev_io_58045 00:04:59.122 size: 50.003479 MiB name: msgpool_58045 00:04:59.122 size: 36.509338 MiB name: fsdev_io_58045 00:04:59.122 size: 21.763794 MiB name: PDU_Pool 00:04:59.122 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:59.122 size: 4.133484 MiB name: evtpool_58045 00:04:59.122 size: 0.026123 MiB name: Session_Pool 00:04:59.122 end mempools------- 00:04:59.122 6 memzones totaling size 4.142822 MiB 00:04:59.122 size: 1.000366 MiB name: RG_ring_0_58045 00:04:59.122 size: 1.000366 MiB name: RG_ring_1_58045 00:04:59.122 size: 1.000366 MiB name: RG_ring_4_58045 00:04:59.122 size: 1.000366 MiB name: RG_ring_5_58045 00:04:59.122 size: 0.125366 MiB name: RG_ring_2_58045 00:04:59.122 size: 0.015991 MiB name: RG_ring_3_58045 00:04:59.122 end memzones------- 00:04:59.122 17:46:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:59.122 heap id: 0 total size: 816.000000 MiB number of busy elements: 318 number of free elements: 18 00:04:59.122 list of free elements. 
size: 16.790649 MiB 00:04:59.122 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:59.122 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:59.122 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:59.122 element at address: 0x200018d00040 with size: 0.999939 MiB 00:04:59.122 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:59.122 element at address: 0x200019200000 with size: 0.999084 MiB 00:04:59.122 element at address: 0x200031e00000 with size: 0.994324 MiB 00:04:59.122 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:59.122 element at address: 0x200018a00000 with size: 0.959656 MiB 00:04:59.122 element at address: 0x200019500040 with size: 0.936401 MiB 00:04:59.122 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:59.122 element at address: 0x20001ac00000 with size: 0.559998 MiB 00:04:59.122 element at address: 0x200000c00000 with size: 0.490173 MiB 00:04:59.122 element at address: 0x200018e00000 with size: 0.487976 MiB 00:04:59.122 element at address: 0x200019600000 with size: 0.485413 MiB 00:04:59.122 element at address: 0x200012c00000 with size: 0.443237 MiB 00:04:59.122 element at address: 0x200028000000 with size: 0.391663 MiB 00:04:59.122 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:59.122 list of standard malloc elements. size: 199.288452 MiB 00:04:59.122 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:59.122 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:59.122 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:04:59.122 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:59.122 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:59.122 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:59.122 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:04:59.122 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:59.122 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:59.122 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:04:59.122 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:59.122 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:59.122 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:59.122 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:59.122 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:59.122 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:59.122 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:59.122 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:59.122 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:59.122 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:59.122 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:59.122 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:59.122 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:59.122 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:59.122 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:59.122 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:59.122 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:59.122 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:04:59.122 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:59.122 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:04:59.122 element at address: 0x2000004ff040 with size: 0.000244 MiB
00:04:59.122 element at address: 0x2000004ff140 with size: 0.000244 MiB
00:04:59.122 element at address: 0x2000004ff240 with size: 0.000244 MiB
00:04:59.124 [several hundred further free-heap entries elided: the dump continues through the 0x2000004ff..., 0x20000087e/f..., 0x2000008ff..., 0x200000c7d/e..., 0x200000cfe/f..., 0x20000a5ff..., 0x200012bff..., 0x200012c7..., 0x200018..., 0x2000192/195/196..., 0x20001ac8f...-0x20001ac95... and 0x200028064...-0x20002806f... ranges; every entry reports the same size: 0.000244 MiB]
00:04:59.124 list of memzone associated elements.
size: 599.920898 MiB
00:04:59.124 element at address: 0x20001ac954c0 with size: 211.416809 MiB; associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:04:59.124 element at address: 0x20002806ff80 with size: 157.562622 MiB; associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:04:59.124 element at address: 0x200012df4740 with size: 92.045105 MiB; associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58045_0
00:04:59.124 element at address: 0x200000dff340 with size: 48.003113 MiB; associated memzone info: size: 48.002930 MiB name: MP_msgpool_58045_0
00:04:59.124 element at address: 0x200003ffdb40 with size: 36.008972 MiB; associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58045_0
00:04:59.124 element at address: 0x2000197be900 with size: 20.255615 MiB; associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:04:59.124 element at address: 0x200031ffeb00 with size: 18.005127 MiB; associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:04:59.124 element at address: 0x2000004ffec0 with size: 3.000305 MiB; associated memzone info: size: 3.000122 MiB name: MP_evtpool_58045_0
00:04:59.124 element at address: 0x2000009ffdc0 with size: 2.000549 MiB; associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58045
00:04:59.124 element at address: 0x2000002d7c00 with size: 1.008179 MiB; associated memzone info: size: 1.007996 MiB name: MP_evtpool_58045
00:04:59.124 element at address: 0x200018efde00 with size: 1.008179 MiB; associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:04:59.124 element at address: 0x2000196bc780 with size: 1.008179 MiB; associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:04:59.124 element at address: 0x200018afde00 with size: 1.008179 MiB; associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:04:59.124 element at address: 0x200012cf25c0 with size: 1.008179 MiB; associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:04:59.124 element at address: 0x200000cff100 with size: 1.000549 MiB; associated memzone info: size: 1.000366 MiB name: RG_ring_0_58045
00:04:59.124 element at address: 0x2000008ffb80 with size: 1.000549 MiB; associated memzone info: size: 1.000366 MiB name: RG_ring_1_58045
00:04:59.124 element at address: 0x2000192ffd40 with size: 1.000549 MiB; associated memzone info: size: 1.000366 MiB name: RG_ring_4_58045
00:04:59.124 element at address: 0x200031efe8c0 with size: 1.000549 MiB; associated memzone info: size: 1.000366 MiB name: RG_ring_5_58045
00:04:59.124 element at address: 0x20000087f5c0 with size: 0.500549 MiB; associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58045
00:04:59.124 element at address: 0x200000c7ecc0 with size: 0.500549 MiB; associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58045
00:04:59.124 element at address: 0x200018e7dac0 with size: 0.500549 MiB; associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:04:59.124 element at address: 0x200012c72280 with size: 0.500549 MiB; associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:04:59.124 element at address: 0x20001967c440 with size: 0.250549 MiB; associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:04:59.124 element at address: 0x2000002b78c0 with size: 0.125549 MiB; associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58045
00:04:59.124 element at address: 0x20000085df80 with size: 0.125549 MiB; associated memzone info: size: 0.125366 MiB name: RG_ring_2_58045
00:04:59.124 element at address: 0x200018af5ac0 with size: 0.031799 MiB; associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:04:59.124 element at address: 0x200028064640 with size: 0.023804 MiB; associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:04:59.124 element at address: 0x200000859d40 with size: 0.016174 MiB; associated memzone info: size: 0.015991 MiB name: RG_ring_3_58045
00:04:59.124 element at address: 0x20002806a7c0 with size: 0.002502 MiB; associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:04:59.124 element at address: 0x2000004ffa40 with size: 0.000366 MiB; associated memzone info: size: 0.000183 MiB name: MP_msgpool_58045
00:04:59.124 element at address: 0x2000008ff900 with size: 0.000366 MiB; associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58045
00:04:59.124 element at address: 0x200012bffd80 with size: 0.000366 MiB; associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58045
00:04:59.125 element at address: 0x20002806b300 with size: 0.000366 MiB; associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:04:59.125 17:46:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:04:59.125 17:46:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58045
00:04:59.125 17:46:17 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 58045 ']'
00:04:59.125 17:46:17 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 58045
00:04:59.125 17:46:17 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:04:59.125 17:46:17 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:59.125 17:46:17 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58045
00:04:59.125 killing process with pid 58045
00:04:59.125 17:46:17 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:59.125 17:46:17 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:59.125 17:46:17 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58045'
00:04:59.125 17:46:17 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 58045
00:04:59.125 17:46:17 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 58045
00:05:00.560 ************************************
00:05:00.560 END TEST dpdk_mem_utility
00:05:00.560 ************************************
00:05:00.560
00:05:00.560 real 0m2.719s
00:05:00.560 user 0m2.728s
00:05:00.560 sys 0m0.400s
00:05:00.560 17:46:18 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:00.560 17:46:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:00.560 17:46:18 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:00.560 17:46:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:00.560 17:46:18 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:00.560 17:46:18 -- common/autotest_common.sh@10 -- # set +x
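The teardown just traced is SPDK's killprocess helper: guard the pid argument, probe liveness with kill -0, resolve the command name (reactor_0 here), special-case sudo, then kill and reap. A minimal standalone sketch of that flow, reconstructed from the xtrace above rather than copied from common/autotest_common.sh (the pgrep in the sudo branch is an assumption, since that path is not taken in this run):

#!/usr/bin/env bash
# Hedged reconstruction of the killprocess flow traced above.
killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1                 # the '[' -z 58045 ']' guard
    kill -0 "$pid" || return 1                # process must still be alive
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in this run
    fi
    if [ "$process_name" = sudo ]; then
        pid=$(pgrep -P "$pid")                # assumed: target sudo's child instead
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                               # reap; only works for own children
}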
00:05:00.560 ************************************ 00:05:00.560 START TEST event 00:05:00.560 ************************************ 00:05:00.560 17:46:18 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:00.560 * Looking for test storage... 00:05:00.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:00.560 17:46:18 event -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:05:00.560 17:46:18 event -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:05:00.560 17:46:18 event -- common/autotest_common.sh@1689 -- # lcov --version 00:05:00.820 17:46:19 event -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:05:00.820 17:46:19 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.820 17:46:19 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.820 17:46:19 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.820 17:46:19 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.820 17:46:19 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.820 17:46:19 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.820 17:46:19 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.820 17:46:19 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.820 17:46:19 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.820 17:46:19 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.820 17:46:19 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.820 17:46:19 event -- scripts/common.sh@344 -- # case "$op" in 00:05:00.820 17:46:19 event -- scripts/common.sh@345 -- # : 1 00:05:00.820 17:46:19 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.820 17:46:19 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:00.820 17:46:19 event -- scripts/common.sh@365 -- # decimal 1 00:05:00.820 17:46:19 event -- scripts/common.sh@353 -- # local d=1 00:05:00.820 17:46:19 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.820 17:46:19 event -- scripts/common.sh@355 -- # echo 1 00:05:00.820 17:46:19 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.820 17:46:19 event -- scripts/common.sh@366 -- # decimal 2 00:05:00.820 17:46:19 event -- scripts/common.sh@353 -- # local d=2 00:05:00.820 17:46:19 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.820 17:46:19 event -- scripts/common.sh@355 -- # echo 2 00:05:00.820 17:46:19 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.820 17:46:19 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.820 17:46:19 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.820 17:46:19 event -- scripts/common.sh@368 -- # return 0 00:05:00.820 17:46:19 event -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.820 17:46:19 event -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:05:00.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.820 --rc genhtml_branch_coverage=1 00:05:00.820 --rc genhtml_function_coverage=1 00:05:00.820 --rc genhtml_legend=1 00:05:00.820 --rc geninfo_all_blocks=1 00:05:00.820 --rc geninfo_unexecuted_blocks=1 00:05:00.820 00:05:00.820 ' 00:05:00.820 17:46:19 event -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:05:00.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.821 --rc genhtml_branch_coverage=1 00:05:00.821 --rc genhtml_function_coverage=1 00:05:00.821 --rc genhtml_legend=1 00:05:00.821 --rc 
geninfo_all_blocks=1 00:05:00.821 --rc geninfo_unexecuted_blocks=1 00:05:00.821 00:05:00.821 ' 00:05:00.821 17:46:19 event -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:05:00.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.821 --rc genhtml_branch_coverage=1 00:05:00.821 --rc genhtml_function_coverage=1 00:05:00.821 --rc genhtml_legend=1 00:05:00.821 --rc geninfo_all_blocks=1 00:05:00.821 --rc geninfo_unexecuted_blocks=1 00:05:00.821 00:05:00.821 ' 00:05:00.821 17:46:19 event -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:05:00.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.821 --rc genhtml_branch_coverage=1 00:05:00.821 --rc genhtml_function_coverage=1 00:05:00.821 --rc genhtml_legend=1 00:05:00.821 --rc geninfo_all_blocks=1 00:05:00.821 --rc geninfo_unexecuted_blocks=1 00:05:00.821 00:05:00.821 ' 00:05:00.821 17:46:19 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:00.821 17:46:19 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:00.821 17:46:19 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:00.821 17:46:19 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:00.821 17:46:19 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.821 17:46:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.821 ************************************ 00:05:00.821 START TEST event_perf 00:05:00.821 ************************************ 00:05:00.821 17:46:19 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:00.821 Running I/O for 1 seconds...[2024-10-25 17:46:19.093453] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:05:00.821 [2024-10-25 17:46:19.093685] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58137 ] 00:05:01.082 [2024-10-25 17:46:19.254775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:01.082 [2024-10-25 17:46:19.360499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.082 [2024-10-25 17:46:19.360809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:01.082 Running I/O for 1 seconds...[2024-10-25 17:46:19.361301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.082 [2024-10-25 17:46:19.361145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:02.468 00:05:02.468 lcore 0: 191355 00:05:02.468 lcore 1: 191356 00:05:02.468 lcore 2: 191359 00:05:02.468 lcore 3: 191356 00:05:02.468 done. 
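Each suite's setup probes the installed lcov with lt 1.15 2, as the scripts/common.sh xtrace a little earlier shows: both version strings are split on '.', '-' and ':' and compared field by field. A hedged sketch of that comparison, reconstructed from the trace rather than copied from scripts/common.sh (treating a missing field as 0 is an assumption standing in for the decimal helper visible in the trace):

#!/usr/bin/env bash
# Field-wise version comparison in the spirit of the cmp_versions trace above.
cmp_versions() {
    local ver1 ver2 ver1_l ver2_l op=$2 v d1 d2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        (( d1 > d2 )) && { [[ $op == '>' || $op == '>=' ]]; return; }
        (( d1 < d2 )) && { [[ $op == '<' || $op == '<=' ]]; return; }
    done
    [[ $op == *'='* ]]       # all fields equal: only ==, <= and >= succeed
}
lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 returns success, as logged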
00:05:02.468
00:05:02.468 real 0m1.469s
00:05:02.468 user 0m4.259s
00:05:02.468 sys 0m0.090s
00:05:02.468 17:46:20 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:02.468 17:46:20 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:05:02.468 ************************************
00:05:02.468 END TEST event_perf
00:05:02.468 ************************************
00:05:02.468 17:46:20 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:05:02.468 17:46:20 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:05:02.468 17:46:20 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:02.468 17:46:20 event -- common/autotest_common.sh@10 -- # set +x
00:05:02.468 ************************************
00:05:02.468 START TEST event_reactor
00:05:02.468 ************************************
00:05:02.468 17:46:20 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:05:02.468 [2024-10-25 17:46:20.619796] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization...
00:05:02.468 [2024-10-25 17:46:20.620292] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58182 ]
00:05:02.468 [2024-10-25 17:46:20.779373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:02.468 [2024-10-25 17:46:20.879290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:03.855 test_start
00:05:03.855 oneshot
00:05:03.855 tick 100
00:05:03.855 tick 100
00:05:03.855 tick 250
00:05:03.855 tick 100
00:05:03.855 tick 100
00:05:03.855 tick 250
00:05:03.855 tick 100
00:05:03.855 tick 500
00:05:03.855 tick 100
00:05:03.855 tick 100
00:05:03.855 tick 250
00:05:03.855 tick 100
00:05:03.855 tick 100
00:05:03.855 test_end
00:05:03.855
00:05:03.855 real 0m1.447s
00:05:03.855 user 0m1.279s
00:05:03.855 sys 0m0.059s
00:05:03.855 17:46:22 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:03.855 17:46:22 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:05:03.855 ************************************
00:05:03.855 END TEST event_reactor
00:05:03.855 ************************************
00:05:03.855 17:46:22 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:03.855 17:46:22 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:05:03.855 17:46:22 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:03.855 17:46:22 event -- common/autotest_common.sh@10 -- # set +x
00:05:03.855 ************************************
00:05:03.855 START TEST event_reactor_perf
00:05:03.855 ************************************
00:05:03.855 17:46:22 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:03.855 [2024-10-25 17:46:22.132498] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization...
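The tick 100 / tick 250 / tick 500 interleaving in the event_reactor output above is three periodic timers with different periods sharing the single reactor on core 0, plus one oneshot at startup. A toy replay of why the 100-period timer dominates the trace (treating the printed numbers as abstract tick periods is an assumption about the units):

#!/usr/bin/env bash
# Toy model only: three timers with periods 100, 250 and 500 on one loop.
echo oneshot                                   # fires once at startup
for (( t = 100; t <= 500; t += 50 )); do       # one sampling window
    (( t % 100 == 0 )) && echo 'tick 100'
    (( t % 250 == 0 )) && echo 'tick 250'
    (( t % 500 == 0 )) && echo 'tick 500'
done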
00:05:03.855 [2024-10-25 17:46:22.132726] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58213 ] 00:05:04.116 [2024-10-25 17:46:22.294428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.116 [2024-10-25 17:46:22.389729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.504 test_start 00:05:05.504 test_end 00:05:05.504 Performance: 314254 events per second 00:05:05.504 ************************************ 00:05:05.504 END TEST event_reactor_perf 00:05:05.504 ************************************ 00:05:05.504 00:05:05.504 real 0m1.436s 00:05:05.504 user 0m1.261s 00:05:05.504 sys 0m0.067s 00:05:05.504 17:46:23 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.504 17:46:23 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:05.504 17:46:23 event -- event/event.sh@49 -- # uname -s 00:05:05.504 17:46:23 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:05.504 17:46:23 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:05.504 17:46:23 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.504 17:46:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.504 17:46:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.504 ************************************ 00:05:05.504 START TEST event_scheduler 00:05:05.504 ************************************ 00:05:05.504 17:46:23 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:05.504 * Looking for test storage... 
00:05:05.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:05.504 17:46:23 event.event_scheduler -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:05:05.504 17:46:23 event.event_scheduler -- common/autotest_common.sh@1689 -- # lcov --version 00:05:05.504 17:46:23 event.event_scheduler -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:05:05.504 17:46:23 event.event_scheduler -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:05:05.504 17:46:23 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.504 17:46:23 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.504 17:46:23 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.504 17:46:23 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.504 17:46:23 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.504 17:46:23 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.504 17:46:23 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.504 17:46:23 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.504 17:46:23 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.504 17:46:23 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.504 17:46:23 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.504 17:46:23 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:05.504 17:46:23 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:05.504 17:46:23 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.504 17:46:23 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.504 17:46:23 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:05.504 17:46:23 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:05.504 17:46:23 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.504 17:46:23 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:05.504 17:46:23 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.504 17:46:23 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:05.504 17:46:23 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:05.504 17:46:23 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.504 17:46:23 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:05.504 17:46:23 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.504 17:46:23 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.504 17:46:23 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.504 17:46:23 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:05.504 17:46:23 event.event_scheduler -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.504 17:46:23 event.event_scheduler -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:05:05.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.504 --rc genhtml_branch_coverage=1 00:05:05.504 --rc genhtml_function_coverage=1 00:05:05.504 --rc genhtml_legend=1 00:05:05.504 --rc geninfo_all_blocks=1 00:05:05.504 --rc geninfo_unexecuted_blocks=1 00:05:05.504 00:05:05.504 ' 00:05:05.504 17:46:23 event.event_scheduler -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:05:05.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.504 --rc genhtml_branch_coverage=1 00:05:05.504 --rc genhtml_function_coverage=1 00:05:05.504 --rc genhtml_legend=1 00:05:05.505 --rc geninfo_all_blocks=1 00:05:05.505 --rc geninfo_unexecuted_blocks=1 00:05:05.505 00:05:05.505 ' 00:05:05.505 17:46:23 event.event_scheduler -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:05:05.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.505 --rc genhtml_branch_coverage=1 00:05:05.505 --rc genhtml_function_coverage=1 00:05:05.505 --rc genhtml_legend=1 00:05:05.505 --rc geninfo_all_blocks=1 00:05:05.505 --rc geninfo_unexecuted_blocks=1 00:05:05.505 00:05:05.505 ' 00:05:05.505 17:46:23 event.event_scheduler -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:05:05.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.505 --rc genhtml_branch_coverage=1 00:05:05.505 --rc genhtml_function_coverage=1 00:05:05.505 --rc genhtml_legend=1 00:05:05.505 --rc geninfo_all_blocks=1 00:05:05.505 --rc geninfo_unexecuted_blocks=1 00:05:05.505 00:05:05.505 ' 00:05:05.505 17:46:23 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:05.505 17:46:23 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58289 00:05:05.505 17:46:23 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.505 17:46:23 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58289 00:05:05.505 17:46:23 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58289 ']' 00:05:05.505 17:46:23 event.event_scheduler -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:05.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.505 17:46:23 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:05.505 17:46:23 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:05.505 17:46:23 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.505 17:46:23 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:05.505 17:46:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:05.505 [2024-10-25 17:46:23.813931] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:05:05.505 [2024-10-25 17:46:23.814198] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58289 ] 00:05:05.766 [2024-10-25 17:46:23.975339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:05.766 [2024-10-25 17:46:24.078523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.766 [2024-10-25 17:46:24.078699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.766 [2024-10-25 17:46:24.078910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:05.766 [2024-10-25 17:46:24.078911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:06.339 17:46:24 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:06.339 17:46:24 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:06.339 17:46:24 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:06.339 17:46:24 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.339 17:46:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.339 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:06.339 POWER: Cannot set governor of lcore 0 to userspace 00:05:06.339 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:06.339 POWER: Cannot set governor of lcore 0 to performance 00:05:06.339 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:06.339 POWER: Cannot set governor of lcore 0 to userspace 00:05:06.339 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:06.339 POWER: Cannot set governor of lcore 0 to userspace 00:05:06.339 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:06.339 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:06.339 POWER: Unable to set Power Management Environment for lcore 0 00:05:06.339 [2024-10-25 17:46:24.661236] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:06.339 [2024-10-25 17:46:24.661334] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:06.339 [2024-10-25 17:46:24.661360] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:06.339 [2024-10-25 17:46:24.661469] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:06.339 [2024-10-25 17:46:24.661618] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:06.339 [2024-10-25 17:46:24.661648] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:06.339 17:46:24 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.339 17:46:24 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:06.339 17:46:24 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.339 17:46:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.600 [2024-10-25 17:46:24.883384] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:06.600 17:46:24 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.600 17:46:24 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:06.600 17:46:24 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.600 17:46:24 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.600 17:46:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.600 ************************************ 00:05:06.600 START TEST scheduler_create_thread 00:05:06.600 ************************************ 00:05:06.600 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:06.600 17:46:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:06.600 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.600 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.600 2 00:05:06.600 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.600 17:46:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:06.600 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.600 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.600 3 00:05:06.600 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.600 17:46:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:06.600 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.600 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.600 4 00:05:06.600 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.601 17:46:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:06.601 17:46:24 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.601 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.601 5 00:05:06.601 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.601 17:46:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:06.601 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.601 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.601 6 00:05:06.601 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.601 17:46:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:06.601 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.601 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.601 7 00:05:06.601 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.601 17:46:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:06.601 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.601 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.601 8 00:05:06.601 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.601 17:46:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:06.601 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.601 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.601 9 00:05:06.601 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.601 17:46:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:06.601 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.601 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.601 10 00:05:06.601 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.601 17:46:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:06.601 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.601 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.601 17:46:24 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.601 17:46:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:06.601 17:46:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:06.601 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.601 17:46:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.601 17:46:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.601 17:46:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:06.601 17:46:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.601 17:46:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.544 17:46:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.544 17:46:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:07.544 17:46:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:07.544 17:46:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.544 17:46:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.929 ************************************ 00:05:08.929 END TEST scheduler_create_thread 00:05:08.929 ************************************ 00:05:08.929 17:46:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.929 00:05:08.929 real 0m2.135s 00:05:08.929 user 0m0.013s 00:05:08.929 sys 0m0.008s 00:05:08.929 17:46:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.929 17:46:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.929 17:46:27 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:08.929 17:46:27 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58289 00:05:08.929 17:46:27 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58289 ']' 00:05:08.929 17:46:27 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58289 00:05:08.929 17:46:27 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:08.929 17:46:27 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:08.929 17:46:27 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58289 00:05:08.929 killing process with pid 58289 00:05:08.929 17:46:27 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:08.929 17:46:27 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:08.929 17:46:27 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58289' 00:05:08.929 17:46:27 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58289 00:05:08.929 17:46:27 event.event_scheduler -- 
common/autotest_common.sh@974 -- # wait 58289 00:05:09.188 [2024-10-25 17:46:27.518281] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:09.828 ************************************ 00:05:09.828 END TEST event_scheduler 00:05:09.828 ************************************ 00:05:09.828 00:05:09.828 real 0m4.470s 00:05:09.828 user 0m7.651s 00:05:09.828 sys 0m0.320s 00:05:09.828 17:46:28 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.828 17:46:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.828 17:46:28 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:09.828 17:46:28 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:09.828 17:46:28 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.828 17:46:28 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.828 17:46:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.828 ************************************ 00:05:09.828 START TEST app_repeat 00:05:09.828 ************************************ 00:05:09.828 17:46:28 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:09.828 17:46:28 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.828 17:46:28 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.828 17:46:28 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:09.828 17:46:28 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.828 17:46:28 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:09.828 17:46:28 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:09.828 17:46:28 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:09.828 Process app_repeat pid: 58384 00:05:09.828 17:46:28 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58384 00:05:09.828 17:46:28 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:09.828 17:46:28 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.828 17:46:28 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58384' 00:05:09.828 spdk_app_start Round 0 00:05:09.828 17:46:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:09.828 17:46:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:09.828 17:46:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58384 /var/tmp/spdk-nbd.sock 00:05:09.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:09.828 17:46:28 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58384 ']' 00:05:09.828 17:46:28 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:09.828 17:46:28 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:09.828 17:46:28 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:09.828 17:46:28 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:09.828 17:46:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.828 [2024-10-25 17:46:28.188709] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
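Stripped of the xtrace plumbing, the scheduler suite that just finished is a short RPC conversation with the test app. A hedged recap in direct rpc.py calls (socket path, plugin name and arguments are taken from the trace; the test normally arranges PYTHONPATH so rpc.py can import scheduler_plugin, and the thread ids 11 and 12 were values returned by earlier creates in this particular run):

#!/usr/bin/env bash
# Recap of the RPC sequence driven by scheduler.sh above.
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

rpc framework_set_scheduler dynamic    # chosen before framework_start_init
rpc framework_start_init
# threads take -n name, -m cpumask, -a busy percentage (0-100)
rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
rpc --plugin scheduler_plugin scheduler_thread_set_active 11 50   # id from create
rpc --plugin scheduler_plugin scheduler_thread_delete 12          # id from create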
00:05:09.828 [2024-10-25 17:46:28.189403] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58384 ] 00:05:10.090 [2024-10-25 17:46:28.366356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.090 [2024-10-25 17:46:28.468424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.090 [2024-10-25 17:46:28.468428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.660 17:46:29 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.660 17:46:29 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:10.660 17:46:29 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.922 Malloc0 00:05:10.922 17:46:29 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.182 Malloc1 00:05:11.182 17:46:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.183 17:46:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.183 17:46:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.183 17:46:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:11.183 17:46:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.183 17:46:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:11.183 17:46:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.183 17:46:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.183 17:46:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.183 17:46:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:11.183 17:46:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.183 17:46:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:11.183 17:46:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:11.183 17:46:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:11.183 17:46:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.183 17:46:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:11.443 /dev/nbd0 00:05:11.443 17:46:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:11.443 17:46:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:11.443 17:46:29 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:11.443 17:46:29 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:11.443 17:46:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:11.443 17:46:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:11.443 17:46:29 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:11.443 17:46:29 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:05:11.443 17:46:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:11.443 17:46:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:11.443 17:46:29 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.443 1+0 records in 00:05:11.443 1+0 records out 00:05:11.443 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038372 s, 10.7 MB/s 00:05:11.443 17:46:29 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.443 17:46:29 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:11.443 17:46:29 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.443 17:46:29 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:11.443 17:46:29 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:11.443 17:46:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.443 17:46:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.443 17:46:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:11.704 /dev/nbd1 00:05:11.704 17:46:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:11.704 17:46:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:11.704 17:46:30 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:11.704 17:46:30 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:11.704 17:46:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:11.704 17:46:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:11.704 17:46:30 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:11.704 17:46:30 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:11.704 17:46:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:11.704 17:46:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:11.704 17:46:30 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.704 1+0 records in 00:05:11.704 1+0 records out 00:05:11.704 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298012 s, 13.7 MB/s 00:05:11.704 17:46:30 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.704 17:46:30 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:11.704 17:46:30 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.704 17:46:30 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:11.704 17:46:30 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:11.704 17:46:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.704 17:46:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.704 17:46:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.704 17:46:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.704 
17:46:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:11.963 17:46:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:11.963 { 00:05:11.963 "nbd_device": "/dev/nbd0", 00:05:11.963 "bdev_name": "Malloc0" 00:05:11.963 }, 00:05:11.963 { 00:05:11.963 "nbd_device": "/dev/nbd1", 00:05:11.963 "bdev_name": "Malloc1" 00:05:11.964 } 00:05:11.964 ]' 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:11.964 { 00:05:11.964 "nbd_device": "/dev/nbd0", 00:05:11.964 "bdev_name": "Malloc0" 00:05:11.964 }, 00:05:11.964 { 00:05:11.964 "nbd_device": "/dev/nbd1", 00:05:11.964 "bdev_name": "Malloc1" 00:05:11.964 } 00:05:11.964 ]' 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:11.964 /dev/nbd1' 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:11.964 /dev/nbd1' 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:11.964 256+0 records in 00:05:11.964 256+0 records out 00:05:11.964 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00916144 s, 114 MB/s 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:11.964 256+0 records in 00:05:11.964 256+0 records out 00:05:11.964 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203685 s, 51.5 MB/s 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:11.964 256+0 records in 00:05:11.964 256+0 records out 00:05:11.964 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02005 s, 52.3 MB/s 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.964 17:46:30 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.964 17:46:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:12.222 17:46:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:12.222 17:46:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:12.222 17:46:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:12.222 17:46:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.222 17:46:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.222 17:46:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:12.222 17:46:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.222 17:46:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.222 17:46:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.222 17:46:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:12.479 17:46:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:12.479 17:46:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:12.479 17:46:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:12.479 17:46:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.479 17:46:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.479 17:46:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:12.479 17:46:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.479 17:46:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.479 17:46:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.479 17:46:30 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.479 17:46:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.736 17:46:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:12.736 17:46:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:12.736 17:46:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.736 17:46:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:12.736 17:46:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:12.736 17:46:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.736 17:46:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:12.736 17:46:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:12.736 17:46:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:12.736 17:46:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:12.736 17:46:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:12.736 17:46:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:12.736 17:46:31 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:12.995 17:46:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:13.931 [2024-10-25 17:46:32.043042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.931 [2024-10-25 17:46:32.137086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.931 [2024-10-25 17:46:32.137200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.931 [2024-10-25 17:46:32.245518] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:13.931 [2024-10-25 17:46:32.245578] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:16.454 spdk_app_start Round 1 00:05:16.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:16.454 17:46:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:16.454 17:46:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:16.454 17:46:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58384 /var/tmp/spdk-nbd.sock 00:05:16.454 17:46:34 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58384 ']' 00:05:16.454 17:46:34 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:16.454 17:46:34 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:16.454 17:46:34 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
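[Editorial aside] The Round 0 trace above is the entire app_repeat round-trip in miniature: create two malloc bdevs over the RPC socket, export them as NBD devices, push a random 1 MiB pattern through each device, read it back, tear down, and SIGTERM the app before the next round begins. A minimal sketch of that per-round body, with the RPC method names and sizes taken from the trace (the waitfornbd polling and the event.sh trap plumbing are omitted, and the pattern-file path is illustrative):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    pattern=/tmp/nbdrandtest                              # illustrative path

    "$rpc" -s "$sock" bdev_malloc_create 64 4096          # -> Malloc0: 64 MB, 4096-byte blocks
    "$rpc" -s "$sock" bdev_malloc_create 64 4096          # -> Malloc1
    "$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
    "$rpc" -s "$sock" nbd_start_disk Malloc1 /dev/nbd1

    dd if=/dev/urandom of="$pattern" bs=4096 count=256    # 1 MiB random pattern
    for nbd in /dev/nbd0 /dev/nbd1; do                    # write phase
        dd if="$pattern" of="$nbd" bs=4096 count=256 oflag=direct
    done
    for nbd in /dev/nbd0 /dev/nbd1; do                    # verify phase
        cmp -b -n 1M "$pattern" "$nbd"                    # byte-for-byte readback check
    done
    rm "$pattern"

    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1
    "$rpc" -s "$sock" spdk_kill_instance SIGTERM

The oflag=direct on the write phase matters: it pushes each block through the NBD device rather than leaving it in the page cache, so the cmp that follows is exercising the malloc bdev behind /dev/nbdX.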
00:05:16.454 17:46:34 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:16.454 17:46:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:16.454 17:46:34 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:16.454 17:46:34 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:16.454 17:46:34 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.454 Malloc0 00:05:16.454 17:46:34 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.712 Malloc1 00:05:16.712 17:46:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.712 17:46:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.712 17:46:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.712 17:46:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:16.712 17:46:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.712 17:46:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:16.712 17:46:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.712 17:46:34 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.712 17:46:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.712 17:46:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:16.712 17:46:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.712 17:46:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:16.712 17:46:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:16.712 17:46:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:16.712 17:46:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.712 17:46:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:16.970 /dev/nbd0 00:05:16.970 17:46:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:16.970 17:46:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:16.970 17:46:35 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:16.970 17:46:35 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:16.970 17:46:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:16.970 17:46:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:16.970 17:46:35 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:16.970 17:46:35 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:16.970 17:46:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:16.970 17:46:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:16.970 17:46:35 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:16.970 1+0 records in 00:05:16.970 1+0 records out 
00:05:16.970 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214453 s, 19.1 MB/s 00:05:16.970 17:46:35 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.970 17:46:35 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:16.970 17:46:35 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.970 17:46:35 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:16.970 17:46:35 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:16.970 17:46:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.970 17:46:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.970 17:46:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:17.227 /dev/nbd1 00:05:17.227 17:46:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:17.227 17:46:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:17.227 17:46:35 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:17.227 17:46:35 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:17.227 17:46:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:17.227 17:46:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:17.227 17:46:35 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:17.227 17:46:35 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:17.227 17:46:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:17.227 17:46:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:17.227 17:46:35 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.227 1+0 records in 00:05:17.227 1+0 records out 00:05:17.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248555 s, 16.5 MB/s 00:05:17.227 17:46:35 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.227 17:46:35 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:17.227 17:46:35 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.227 17:46:35 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:17.227 17:46:35 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:17.227 17:46:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.227 17:46:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.227 17:46:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.227 17:46:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.227 17:46:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.227 17:46:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:17.227 { 00:05:17.227 "nbd_device": "/dev/nbd0", 00:05:17.227 "bdev_name": "Malloc0" 00:05:17.227 }, 00:05:17.227 { 00:05:17.227 "nbd_device": "/dev/nbd1", 00:05:17.227 "bdev_name": "Malloc1" 00:05:17.227 } 
00:05:17.227 ]' 00:05:17.227 17:46:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:17.227 { 00:05:17.227 "nbd_device": "/dev/nbd0", 00:05:17.227 "bdev_name": "Malloc0" 00:05:17.227 }, 00:05:17.227 { 00:05:17.227 "nbd_device": "/dev/nbd1", 00:05:17.227 "bdev_name": "Malloc1" 00:05:17.227 } 00:05:17.227 ]' 00:05:17.227 17:46:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:17.485 /dev/nbd1' 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:17.485 /dev/nbd1' 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:17.485 256+0 records in 00:05:17.485 256+0 records out 00:05:17.485 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00720472 s, 146 MB/s 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:17.485 256+0 records in 00:05:17.485 256+0 records out 00:05:17.485 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166112 s, 63.1 MB/s 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:17.485 256+0 records in 00:05:17.485 256+0 records out 00:05:17.485 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0170563 s, 61.5 MB/s 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:17.485 17:46:35 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.485 17:46:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:17.743 17:46:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:17.743 17:46:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:17.743 17:46:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:17.743 17:46:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.743 17:46:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.743 17:46:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:17.743 17:46:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:17.743 17:46:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.743 17:46:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.743 17:46:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:18.001 17:46:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:18.001 17:46:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:18.001 17:46:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:18.001 17:46:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.001 17:46:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.001 17:46:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:18.001 17:46:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.001 17:46:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.001 17:46:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.001 17:46:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.001 17:46:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.001 17:46:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:18.001 17:46:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:18.001 17:46:36 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:18.258 17:46:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:18.258 17:46:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.258 17:46:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:18.258 17:46:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:18.258 17:46:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:18.258 17:46:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:18.258 17:46:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:18.258 17:46:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:18.258 17:46:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:18.258 17:46:36 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:18.515 17:46:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:19.151 [2024-10-25 17:46:37.298135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.151 [2024-10-25 17:46:37.367792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.151 [2024-10-25 17:46:37.367793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.151 [2024-10-25 17:46:37.464751] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:19.151 [2024-10-25 17:46:37.464809] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:21.674 spdk_app_start Round 2 00:05:21.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:21.674 17:46:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:21.675 17:46:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:21.675 17:46:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58384 /var/tmp/spdk-nbd.sock 00:05:21.675 17:46:39 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58384 ']' 00:05:21.675 17:46:39 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:21.675 17:46:39 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:21.675 17:46:39 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
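[Editorial aside] Between setup and teardown, each round also audits the NBD bookkeeping with nbd_get_disks: the device count must be 2 while both disks are exported and 0 after they are stopped, which is exactly the JSON-plus-jq pipeline traced above. Roughly, reusing rpc and sock from the earlier sketch (the || true is needed because grep -c exits non-zero when the list is empty, as the trace's bare "true" step shows):

    disks_json=$("$rpc" -s "$sock" nbd_get_disks)
    names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')   # "/dev/nbd0\n/dev/nbd1" or empty
    count=$(echo "$names" | grep -c /dev/nbd || true)         # grep -c prints 0 but exits 1 on no match
    [ "$count" -eq 2 ]    # after both nbd_stop_disk calls, the same pipeline must yield 0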
00:05:21.675 17:46:39 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:21.675 17:46:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:21.675 17:46:39 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:21.675 17:46:39 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:21.675 17:46:39 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.931 Malloc0 00:05:21.931 17:46:40 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.187 Malloc1 00:05:22.187 17:46:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.187 17:46:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.188 17:46:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.188 17:46:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:22.188 17:46:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.188 17:46:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:22.188 17:46:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.188 17:46:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.188 17:46:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.188 17:46:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:22.188 17:46:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.188 17:46:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:22.188 17:46:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:22.188 17:46:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:22.188 17:46:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.188 17:46:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:22.188 /dev/nbd0 00:05:22.444 17:46:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:22.444 17:46:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:22.444 17:46:40 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:22.444 17:46:40 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:22.444 17:46:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:22.444 17:46:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:22.444 17:46:40 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:22.445 17:46:40 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:22.445 17:46:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:22.445 17:46:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:22.445 17:46:40 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.445 1+0 records in 00:05:22.445 1+0 records out 
00:05:22.445 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206646 s, 19.8 MB/s 00:05:22.445 17:46:40 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.445 17:46:40 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:22.445 17:46:40 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.445 17:46:40 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:22.445 17:46:40 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:22.445 17:46:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.445 17:46:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.445 17:46:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:22.445 /dev/nbd1 00:05:22.445 17:46:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:22.445 17:46:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:22.445 17:46:40 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:22.445 17:46:40 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:22.445 17:46:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:22.445 17:46:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:22.445 17:46:40 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:22.445 17:46:40 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:22.445 17:46:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:22.445 17:46:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:22.445 17:46:40 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.445 1+0 records in 00:05:22.445 1+0 records out 00:05:22.445 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00016236 s, 25.2 MB/s 00:05:22.445 17:46:40 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.701 17:46:40 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:22.701 17:46:40 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.701 17:46:40 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:22.701 17:46:40 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:22.701 17:46:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.701 17:46:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.701 17:46:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.701 17:46:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.701 17:46:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.701 17:46:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:22.701 { 00:05:22.701 "nbd_device": "/dev/nbd0", 00:05:22.701 "bdev_name": "Malloc0" 00:05:22.701 }, 00:05:22.701 { 00:05:22.702 "nbd_device": "/dev/nbd1", 00:05:22.702 "bdev_name": "Malloc1" 00:05:22.702 } 
00:05:22.702 ]' 00:05:22.702 17:46:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.702 17:46:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:22.702 { 00:05:22.702 "nbd_device": "/dev/nbd0", 00:05:22.702 "bdev_name": "Malloc0" 00:05:22.702 }, 00:05:22.702 { 00:05:22.702 "nbd_device": "/dev/nbd1", 00:05:22.702 "bdev_name": "Malloc1" 00:05:22.702 } 00:05:22.702 ]' 00:05:22.702 17:46:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:22.702 /dev/nbd1' 00:05:22.702 17:46:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.702 17:46:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:22.702 /dev/nbd1' 00:05:22.702 17:46:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:22.702 17:46:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:22.702 17:46:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:22.702 17:46:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:22.702 17:46:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:22.702 17:46:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.702 17:46:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.702 17:46:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:22.702 17:46:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.702 17:46:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:22.702 17:46:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:22.702 256+0 records in 00:05:22.702 256+0 records out 00:05:22.702 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00442105 s, 237 MB/s 00:05:22.702 17:46:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.702 17:46:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:22.959 256+0 records in 00:05:22.959 256+0 records out 00:05:22.959 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140189 s, 74.8 MB/s 00:05:22.959 17:46:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.959 17:46:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:22.959 256+0 records in 00:05:22.959 256+0 records out 00:05:22.959 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145423 s, 72.1 MB/s 00:05:22.959 17:46:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:22.959 17:46:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.959 17:46:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.959 17:46:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:22.959 17:46:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.959 17:46:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:22.959 17:46:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:22.959 17:46:41 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.959 17:46:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:22.959 17:46:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.959 17:46:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:22.959 17:46:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.959 17:46:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:22.959 17:46:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.959 17:46:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.959 17:46:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:22.959 17:46:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:22.959 17:46:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.959 17:46:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:22.959 17:46:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:22.959 17:46:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:22.959 17:46:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:22.959 17:46:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.959 17:46:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.959 17:46:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:22.959 17:46:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:22.959 17:46:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.959 17:46:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.959 17:46:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:23.216 17:46:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:23.216 17:46:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:23.216 17:46:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:23.216 17:46:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.216 17:46:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.216 17:46:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:23.216 17:46:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.216 17:46:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.216 17:46:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.216 17:46:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.216 17:46:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.473 17:46:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:23.473 17:46:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:23.473 17:46:41 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:23.473 17:46:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:23.473 17:46:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.473 17:46:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:23.473 17:46:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:23.473 17:46:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:23.473 17:46:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:23.473 17:46:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:23.473 17:46:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:23.473 17:46:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:23.473 17:46:41 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:23.731 17:46:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:24.296 [2024-10-25 17:46:42.678882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.553 [2024-10-25 17:46:42.760190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.553 [2024-10-25 17:46:42.760296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.553 [2024-10-25 17:46:42.865573] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:24.553 [2024-10-25 17:46:42.865616] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:27.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:27.077 17:46:45 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58384 /var/tmp/spdk-nbd.sock 00:05:27.077 17:46:45 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58384 ']' 00:05:27.077 17:46:45 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:27.077 17:46:45 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:27.077 17:46:45 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
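[Editorial aside] Every one of those disk starts and stops is gated by the waitfornbd / waitfornbd_exit helpers from autotest_common.sh, whose xtrace dominates each round above: poll /proc/partitions for the device name for up to 20 tries, and on start additionally prove the device services a single O_DIRECT block read. A reconstruction of waitfornbd from the traced commands (the retry delay is an assumption; the trace does not show it):

    waitfornbd() {
        local nbd_name=$1 i size
        local tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdtest
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                    # assumed poll interval
        done
        # a one-block direct read proves the kernel<->SPDK NBD path is live
        dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]
    }

waitfornbd_exit is the mirror image: it loops until the name disappears from /proc/partitions, then breaks out and returns 0.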
00:05:27.077 17:46:45 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:27.077 17:46:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:27.077 17:46:45 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.077 17:46:45 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:27.077 17:46:45 event.app_repeat -- event/event.sh@39 -- # killprocess 58384 00:05:27.077 17:46:45 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58384 ']' 00:05:27.077 17:46:45 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58384 00:05:27.077 17:46:45 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:27.077 17:46:45 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:27.077 17:46:45 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58384 00:05:27.077 killing process with pid 58384 00:05:27.077 17:46:45 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:27.077 17:46:45 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:27.077 17:46:45 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58384' 00:05:27.077 17:46:45 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58384 00:05:27.078 17:46:45 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58384 00:05:27.643 spdk_app_start is called in Round 0. 00:05:27.643 Shutdown signal received, stop current app iteration 00:05:27.643 Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 reinitialization... 00:05:27.643 spdk_app_start is called in Round 1. 00:05:27.643 Shutdown signal received, stop current app iteration 00:05:27.643 Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 reinitialization... 00:05:27.643 spdk_app_start is called in Round 2. 00:05:27.643 Shutdown signal received, stop current app iteration 00:05:27.643 Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 reinitialization... 00:05:27.643 spdk_app_start is called in Round 3. 00:05:27.643 Shutdown signal received, stop current app iteration 00:05:27.643 17:46:45 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:27.643 17:46:45 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:27.643 00:05:27.643 real 0m17.724s 00:05:27.643 user 0m38.800s 00:05:27.643 sys 0m2.075s 00:05:27.643 17:46:45 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.643 ************************************ 00:05:27.643 END TEST app_repeat 00:05:27.643 ************************************ 00:05:27.643 17:46:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:27.643 17:46:45 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:27.643 17:46:45 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:27.643 17:46:45 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.643 17:46:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.643 17:46:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.643 ************************************ 00:05:27.643 START TEST cpu_locks 00:05:27.643 ************************************ 00:05:27.643 17:46:45 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:27.643 * Looking for test storage... 
00:05:27.643 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:27.643 17:46:45 event.cpu_locks -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:05:27.643 17:46:45 event.cpu_locks -- common/autotest_common.sh@1689 -- # lcov --version 00:05:27.643 17:46:45 event.cpu_locks -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:05:27.643 17:46:46 event.cpu_locks -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:05:27.643 17:46:46 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.643 17:46:46 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.644 17:46:46 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.644 17:46:46 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.644 17:46:46 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.644 17:46:46 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.644 17:46:46 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.644 17:46:46 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.644 17:46:46 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.644 17:46:46 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.644 17:46:46 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.644 17:46:46 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:27.644 17:46:46 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:27.644 17:46:46 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.644 17:46:46 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:27.644 17:46:46 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:27.644 17:46:46 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:27.644 17:46:46 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.644 17:46:46 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:27.644 17:46:46 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.644 17:46:46 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:27.644 17:46:46 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:27.644 17:46:46 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.644 17:46:46 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:27.644 17:46:46 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.644 17:46:46 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.644 17:46:46 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.644 17:46:46 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:27.644 17:46:46 event.cpu_locks -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.644 17:46:46 event.cpu_locks -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:05:27.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.644 --rc genhtml_branch_coverage=1 00:05:27.644 --rc genhtml_function_coverage=1 00:05:27.644 --rc genhtml_legend=1 00:05:27.644 --rc geninfo_all_blocks=1 00:05:27.644 --rc geninfo_unexecuted_blocks=1 00:05:27.644 00:05:27.644 ' 00:05:27.644 17:46:46 event.cpu_locks -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:05:27.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.644 --rc genhtml_branch_coverage=1 00:05:27.644 --rc genhtml_function_coverage=1 
00:05:27.644 --rc genhtml_legend=1 00:05:27.644 --rc geninfo_all_blocks=1 00:05:27.644 --rc geninfo_unexecuted_blocks=1 00:05:27.644 00:05:27.644 ' 00:05:27.644 17:46:46 event.cpu_locks -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:05:27.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.644 --rc genhtml_branch_coverage=1 00:05:27.644 --rc genhtml_function_coverage=1 00:05:27.644 --rc genhtml_legend=1 00:05:27.644 --rc geninfo_all_blocks=1 00:05:27.644 --rc geninfo_unexecuted_blocks=1 00:05:27.644 00:05:27.644 ' 00:05:27.644 17:46:46 event.cpu_locks -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:05:27.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.644 --rc genhtml_branch_coverage=1 00:05:27.644 --rc genhtml_function_coverage=1 00:05:27.644 --rc genhtml_legend=1 00:05:27.644 --rc geninfo_all_blocks=1 00:05:27.644 --rc geninfo_unexecuted_blocks=1 00:05:27.644 00:05:27.644 ' 00:05:27.644 17:46:46 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:27.644 17:46:46 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:27.644 17:46:46 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:27.644 17:46:46 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:27.644 17:46:46 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.644 17:46:46 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.644 17:46:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.644 ************************************ 00:05:27.644 START TEST default_locks 00:05:27.644 ************************************ 00:05:27.644 17:46:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:27.644 17:46:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58820 00:05:27.644 17:46:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58820 00:05:27.644 17:46:46 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58820 ']' 00:05:27.644 17:46:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.644 17:46:46 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.644 17:46:46 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:27.644 17:46:46 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.644 17:46:46 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:27.644 17:46:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.902 [2024-10-25 17:46:46.116713] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
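[Editorial aside] The lcov probe traced just above, before default_locks begins, uses the lt / cmp_versions helpers from scripts/common.sh: the installed lcov version (extracted with awk '{print $NF}') and the threshold "2" are each split on '.', '-' and ':' and compared field by field to decide whether the old --rc lcov_*_coverage flags are needed. A condensed sketch of that comparison, not the verbatim helper:

    version_lt() {            # usage: version_lt 1.15 2  -> true (exit 0)
        local IFS=.-:
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                                          # equal is not less-than
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 &&
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'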
00:05:27.902 [2024-10-25 17:46:46.116805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58820 ] 00:05:27.902 [2024-10-25 17:46:46.266134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.161 [2024-10-25 17:46:46.345166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.762 17:46:46 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:28.762 17:46:46 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:28.762 17:46:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58820 00:05:28.762 17:46:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58820 00:05:28.762 17:46:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.762 17:46:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58820 00:05:28.762 17:46:47 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 58820 ']' 00:05:28.762 17:46:47 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 58820 00:05:28.762 17:46:47 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:28.762 17:46:47 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:28.762 17:46:47 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58820 00:05:28.762 killing process with pid 58820 00:05:28.762 17:46:47 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:28.762 17:46:47 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:28.762 17:46:47 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58820' 00:05:28.762 17:46:47 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 58820 00:05:28.762 17:46:47 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 58820 00:05:30.143 17:46:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58820 00:05:30.143 17:46:48 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:30.143 17:46:48 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58820 00:05:30.143 17:46:48 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:30.143 17:46:48 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.143 17:46:48 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:30.143 17:46:48 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.143 17:46:48 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58820 00:05:30.143 17:46:48 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58820 ']' 00:05:30.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
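locks_exist in the trace above is the whole verification: spdk_tgt started with -m 0x1 takes a POSIX lock on /var/tmp/spdk_cpu_lock_000, and lslocks piped through grep confirms it. Restated as a self-contained helper (names and the lock-file prefix come straight from the trace):

# Sketch of the locks_exist check from event/cpu_locks.sh: list the POSIX
# locks held by the pid and look for the spdk_cpu_lock_* prefix. grep -q
# produces no output; only the exit status matters.
locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

# usage, mirroring the default_locks flow above:
#   spdk_tgt -m 0x1 & pid=$!           # claims /var/tmp/spdk_cpu_lock_000
#   locks_exist "$pid" && echo "core lock held by $pid"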
00:05:30.143 17:46:48 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.143 17:46:48 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:30.143 17:46:48 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.143 17:46:48 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.143 ERROR: process (pid: 58820) is no longer running 00:05:30.143 17:46:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.143 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58820) - No such process 00:05:30.143 17:46:48 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.143 17:46:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:30.143 17:46:48 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:30.143 17:46:48 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:30.143 17:46:48 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:30.143 17:46:48 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:30.143 17:46:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:30.143 17:46:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:30.143 17:46:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:30.143 17:46:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:30.143 00:05:30.143 real 0m2.199s 00:05:30.143 user 0m2.178s 00:05:30.143 sys 0m0.398s 00:05:30.143 17:46:48 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.143 ************************************ 00:05:30.143 END TEST default_locks 00:05:30.143 ************************************ 00:05:30.143 17:46:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.143 17:46:48 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:30.143 17:46:48 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.143 17:46:48 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.143 17:46:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.143 ************************************ 00:05:30.143 START TEST default_locks_via_rpc 00:05:30.143 ************************************ 00:05:30.143 17:46:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:30.143 17:46:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58873 00:05:30.143 17:46:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58873 00:05:30.143 17:46:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58873 ']' 00:05:30.143 17:46:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
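default_locks finishes by asserting the killed target is really gone: NOT waitforlisten 58820 must fail, and the es=1 bookkeeping above is the inversion. A sketch of that wrapper, with the >128 branch (signal deaths) handled the way the trace suggests:

# Sketch of the NOT helper traced above: run a command and invert its status.
# Succeeds only when the wrapped command fails with an ordinary error code.
NOT() {
    local es=0
    "$@" || es=$?
    ((es > 128)) && return 1   # assumption: death by signal is a real failure
    ((es != 0))                # ordinary nonzero exit -> NOT succeeds
}

# usage: NOT waitforlisten "$pid" && echo "target $pid is gone, as expected"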
00:05:30.143 17:46:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:30.143 17:46:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.143 17:46:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.143 17:46:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.143 17:46:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:30.143 [2024-10-25 17:46:48.373799] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:05:30.143 [2024-10-25 17:46:48.373892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58873 ] 00:05:30.143 [2024-10-25 17:46:48.521866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.402 [2024-10-25 17:46:48.602284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.968 17:46:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.968 17:46:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:30.968 17:46:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:30.968 17:46:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.968 17:46:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.968 17:46:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.968 17:46:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:30.968 17:46:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:30.968 17:46:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:30.968 17:46:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:30.968 17:46:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:30.968 17:46:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.968 17:46:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.968 17:46:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.968 17:46:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58873 00:05:30.968 17:46:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58873 00:05:30.968 17:46:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:30.968 17:46:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58873 00:05:30.968 17:46:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 58873 ']' 00:05:30.968 17:46:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 58873 00:05:30.968 17:46:49 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:30.968 17:46:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:30.968 17:46:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58873 00:05:30.968 17:46:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:30.968 17:46:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:30.968 killing process with pid 58873 00:05:30.968 17:46:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58873' 00:05:30.968 17:46:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 58873 00:05:30.968 17:46:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 58873 00:05:32.343 ************************************ 00:05:32.343 END TEST default_locks_via_rpc 00:05:32.343 ************************************ 00:05:32.343 00:05:32.343 real 0m2.229s 00:05:32.343 user 0m2.205s 00:05:32.343 sys 0m0.405s 00:05:32.343 17:46:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.343 17:46:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.343 17:46:50 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:32.343 17:46:50 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.343 17:46:50 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.343 17:46:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.343 ************************************ 00:05:32.343 START TEST non_locking_app_on_locked_coremask 00:05:32.343 ************************************ 00:05:32.343 17:46:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:32.343 17:46:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58925 00:05:32.343 17:46:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58925 /var/tmp/spdk.sock 00:05:32.343 17:46:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58925 ']' 00:05:32.343 17:46:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.343 17:46:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.343 17:46:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
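The default_locks_via_rpc run above drives the same lock machinery over JSON-RPC instead of process flags, pairing framework_disable_cpumask_locks with framework_enable_cpumask_locks. Sketched with scripts/rpc.py against the sockets from the trace ($rootdir and $pid stand in for values the harness provides):

# Sketch: toggle the CPU-core file locks of a live target over JSON-RPC,
# as in default_locks_via_rpc above. Assumes spdk_tgt is already listening
# on /var/tmp/spdk.sock and that $rootdir points at the SPDK tree.
rpc="$rootdir/scripts/rpc.py -s /var/tmp/spdk.sock"

$rpc framework_disable_cpumask_locks          # releases /var/tmp/spdk_cpu_lock_*
lslocks -p "$pid" | grep -q spdk_cpu_lock || echo "locks released"

$rpc framework_enable_cpumask_locks           # re-acquires them
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "locks re-taken"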
00:05:32.343 17:46:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.343 17:46:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.343 17:46:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.343 [2024-10-25 17:46:50.668029] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:05:32.343 [2024-10-25 17:46:50.668153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58925 ] 00:05:32.602 [2024-10-25 17:46:50.822773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.602 [2024-10-25 17:46:50.907565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.166 17:46:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.166 17:46:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:33.166 17:46:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:33.166 17:46:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58941 00:05:33.166 17:46:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58941 /var/tmp/spdk2.sock 00:05:33.167 17:46:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58941 ']' 00:05:33.167 17:46:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:33.167 17:46:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:33.167 17:46:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:33.167 17:46:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.167 17:46:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.167 [2024-10-25 17:46:51.558863] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:05:33.167 [2024-10-25 17:46:51.558985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58941 ] 00:05:33.424 [2024-10-25 17:46:51.723948] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
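non_locking_app_on_locked_coremask, starting above, is the positive case: a second target may share core 0 as long as it opts out of locking. The startup pair condenses to this sketch (binary path copied from the trace; the waitforlisten polling is elided):

# Sketch: two targets on the same core without a lock conflict. The first
# instance claims /var/tmp/spdk_cpu_lock_000; the second skips the claim
# entirely via --disable-cpumask-locks and answers on its own RPC socket.
spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

$spdk_tgt -m 0x1 &
pid1=$!                                            # holds the core-0 lock
$spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!                                            # same core, no lock taken
# wait for both RPC sockets before asserting anything (waitforlisten above)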
00:05:33.424 [2024-10-25 17:46:51.723993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.680 [2024-10-25 17:46:51.892871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.613 17:46:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.613 17:46:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:34.613 17:46:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58925 00:05:34.613 17:46:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:34.613 17:46:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58925 00:05:34.869 17:46:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58925 00:05:34.869 17:46:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58925 ']' 00:05:34.869 17:46:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58925 00:05:34.869 17:46:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:34.869 17:46:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:34.869 17:46:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58925 00:05:34.869 17:46:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:34.869 17:46:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:34.869 killing process with pid 58925 00:05:34.869 17:46:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58925' 00:05:34.870 17:46:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58925 00:05:34.870 17:46:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58925 00:05:37.394 17:46:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58941 00:05:37.394 17:46:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58941 ']' 00:05:37.394 17:46:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58941 00:05:37.394 17:46:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:37.394 17:46:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:37.394 17:46:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58941 00:05:37.394 17:46:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:37.394 17:46:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:37.394 killing process with pid 58941 00:05:37.394 17:46:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58941' 00:05:37.394 17:46:55 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58941 00:05:37.394 17:46:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58941 00:05:38.768 00:05:38.768 real 0m6.220s 00:05:38.768 user 0m6.496s 00:05:38.768 sys 0m0.772s 00:05:38.768 17:46:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.768 ************************************ 00:05:38.768 END TEST non_locking_app_on_locked_coremask 00:05:38.768 ************************************ 00:05:38.768 17:46:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.768 17:46:56 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:38.768 17:46:56 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.768 17:46:56 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.768 17:46:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.768 ************************************ 00:05:38.768 START TEST locking_app_on_unlocked_coremask 00:05:38.768 ************************************ 00:05:38.768 17:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:38.768 17:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59032 00:05:38.768 17:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:38.768 17:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59032 /var/tmp/spdk.sock 00:05:38.768 17:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59032 ']' 00:05:38.768 17:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.768 17:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:38.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.768 17:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.768 17:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:38.768 17:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.768 [2024-10-25 17:46:56.910245] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:05:38.768 [2024-10-25 17:46:56.910346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59032 ] 00:05:38.768 [2024-10-25 17:46:57.059146] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:38.768 [2024-10-25 17:46:57.059194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.768 [2024-10-25 17:46:57.145410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.700 17:46:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.700 17:46:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:39.700 17:46:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59048 00:05:39.700 17:46:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59048 /var/tmp/spdk2.sock 00:05:39.700 17:46:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59048 ']' 00:05:39.700 17:46:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:39.700 17:46:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:39.700 17:46:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.700 17:46:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:39.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:39.700 17:46:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.700 17:46:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.700 [2024-10-25 17:46:57.866651] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
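Every startup in this section blocks in waitforlisten until the target's UNIX-domain RPC socket answers; the max_retries=100 above is that loop's budget. A rough stand-in for the helper, probing readiness with rpc.py's rpc_get_methods (the internals here are an assumption, only the shape matches the trace):

# Sketch of a waitforlisten-style readiness probe: poll the RPC socket until
# the target responds or the retry budget runs out.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1    # target died before listening
        if "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0                               # socket is up and answering
        fi
        sleep 0.5
    done
    return 1                                       # never came up
}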
00:05:39.700 [2024-10-25 17:46:57.866766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59048 ] 00:05:39.700 [2024-10-25 17:46:58.030760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.957 [2024-10-25 17:46:58.199041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.891 17:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:40.891 17:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:40.891 17:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59048 00:05:40.891 17:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59048 00:05:40.891 17:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:41.149 17:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59032 00:05:41.149 17:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59032 ']' 00:05:41.149 17:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59032 00:05:41.149 17:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:41.149 17:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:41.149 17:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59032 00:05:41.149 17:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:41.149 17:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:41.149 killing process with pid 59032 00:05:41.149 17:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59032' 00:05:41.149 17:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59032 00:05:41.149 17:46:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59032 00:05:43.674 17:47:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59048 00:05:43.674 17:47:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59048 ']' 00:05:43.674 17:47:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59048 00:05:43.674 17:47:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:43.674 17:47:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:43.674 17:47:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59048 00:05:43.674 17:47:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:43.674 17:47:01 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:43.674 killing process with pid 59048 00:05:43.674 17:47:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59048' 00:05:43.674 17:47:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59048 00:05:43.674 17:47:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59048 00:05:44.606 00:05:44.606 real 0m6.135s 00:05:44.606 user 0m6.444s 00:05:44.606 sys 0m0.820s 00:05:44.606 17:47:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.606 17:47:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.606 ************************************ 00:05:44.606 END TEST locking_app_on_unlocked_coremask 00:05:44.606 ************************************ 00:05:44.606 17:47:03 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:44.606 17:47:03 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.606 17:47:03 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.606 17:47:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.606 ************************************ 00:05:44.606 START TEST locking_app_on_locked_coremask 00:05:44.606 ************************************ 00:05:44.606 17:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:44.606 17:47:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59140 00:05:44.606 17:47:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59140 /var/tmp/spdk.sock 00:05:44.606 17:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59140 ']' 00:05:44.606 17:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.606 17:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.606 17:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.606 17:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.606 17:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.606 17:47:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.864 [2024-10-25 17:47:03.098212] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:05:44.864 [2024-10-25 17:47:03.098327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59140 ] 00:05:44.864 [2024-10-25 17:47:03.252791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.120 [2024-10-25 17:47:03.329169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.686 17:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.686 17:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:45.686 17:47:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59156 00:05:45.686 17:47:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59156 /var/tmp/spdk2.sock 00:05:45.686 17:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:45.686 17:47:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:45.686 17:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59156 /var/tmp/spdk2.sock 00:05:45.686 17:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:45.686 17:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.686 17:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:45.686 17:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.686 17:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59156 /var/tmp/spdk2.sock 00:05:45.686 17:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59156 ']' 00:05:45.686 17:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:45.686 17:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:45.686 17:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:45.686 17:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.686 17:47:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.686 [2024-10-25 17:47:03.954754] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:05:45.686 [2024-10-25 17:47:03.954873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59156 ] 00:05:45.686 [2024-10-25 17:47:04.117042] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59140 has claimed it. 00:05:45.686 [2024-10-25 17:47:04.117085] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:46.251 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59156) - No such process 00:05:46.251 ERROR: process (pid: 59156) is no longer running 00:05:46.251 17:47:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.251 17:47:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:46.251 17:47:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:46.251 17:47:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:46.251 17:47:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:46.251 17:47:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:46.251 17:47:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59140 00:05:46.251 17:47:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59140 00:05:46.251 17:47:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:46.508 17:47:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59140 00:05:46.508 17:47:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59140 ']' 00:05:46.508 17:47:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59140 00:05:46.508 17:47:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:46.508 17:47:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:46.508 17:47:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59140 00:05:46.508 17:47:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:46.508 17:47:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:46.508 killing process with pid 59140 00:05:46.508 17:47:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59140' 00:05:46.508 17:47:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59140 00:05:46.508 17:47:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59140 00:05:47.881 00:05:47.881 real 0m2.947s 00:05:47.881 user 0m3.123s 00:05:47.881 sys 0m0.532s 00:05:47.881 17:47:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.881 17:47:05 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:47.881 ************************************ 00:05:47.881 END TEST locking_app_on_locked_coremask 00:05:47.881 ************************************ 00:05:47.881 17:47:06 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:47.881 17:47:06 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.881 17:47:06 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.881 17:47:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.881 ************************************ 00:05:47.881 START TEST locking_overlapped_coremask 00:05:47.881 ************************************ 00:05:47.882 17:47:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:47.882 17:47:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59209 00:05:47.882 17:47:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59209 /var/tmp/spdk.sock 00:05:47.882 17:47:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59209 ']' 00:05:47.882 17:47:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.882 17:47:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:47.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.882 17:47:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.882 17:47:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:47.882 17:47:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:47.882 17:47:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.882 [2024-10-25 17:47:06.079993] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
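locking_app_on_locked_coremask, just concluded above, is the negative case: with locks enabled on both sides, the second target trips over "Cannot create lock on core 0, probably process 59140 has claimed it" and exits before ever listening. Reproduced as a sketch (reusing $spdk_tgt, NOT and waitforlisten_sketch from the earlier sketches):

# Sketch: provoke the claim conflict from locking_app_on_locked_coremask.
# Both instances want core 0 with locking enabled; the second must die with
# "Unable to acquire lock on assigned core mask - exiting".
$spdk_tgt -m 0x1 &
pid1=$!
waitforlisten_sketch "$pid1"                       # first claim must be in place
$spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
pid2=$!
NOT waitforlisten_sketch "$pid2" /var/tmp/spdk2.sock &&
    echo "second target failed to claim core 0, as expected"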
00:05:47.882 [2024-10-25 17:47:06.080105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59209 ] 00:05:47.882 [2024-10-25 17:47:06.234783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:47.882 [2024-10-25 17:47:06.313174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.140 [2024-10-25 17:47:06.313756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.140 [2024-10-25 17:47:06.313795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.707 17:47:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.707 17:47:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:48.707 17:47:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:48.707 17:47:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59227 00:05:48.707 17:47:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59227 /var/tmp/spdk2.sock 00:05:48.707 17:47:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:48.707 17:47:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59227 /var/tmp/spdk2.sock 00:05:48.707 17:47:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:48.707 17:47:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.707 17:47:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:48.707 17:47:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.707 17:47:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59227 /var/tmp/spdk2.sock 00:05:48.707 17:47:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59227 ']' 00:05:48.707 17:47:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:48.707 17:47:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.707 17:47:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:48.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:48.707 17:47:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.707 17:47:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.707 [2024-10-25 17:47:06.968744] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:05:48.707 [2024-10-25 17:47:06.969139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59227 ] 00:05:48.965 [2024-10-25 17:47:07.141455] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59209 has claimed it. 00:05:48.965 [2024-10-25 17:47:07.145603] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:49.223 ERROR: process (pid: 59227) is no longer running 00:05:49.223 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59227) - No such process 00:05:49.223 17:47:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.223 17:47:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:49.223 17:47:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:49.223 17:47:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:49.223 17:47:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:49.223 17:47:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:49.223 17:47:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:49.223 17:47:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:49.223 17:47:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:49.223 17:47:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:49.224 17:47:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59209 00:05:49.224 17:47:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59209 ']' 00:05:49.224 17:47:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59209 00:05:49.224 17:47:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:49.224 17:47:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:49.224 17:47:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59209 00:05:49.224 17:47:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:49.224 17:47:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:49.224 killing process with pid 59209 00:05:49.224 17:47:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59209' 00:05:49.224 17:47:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59209 00:05:49.224 17:47:07 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59209 00:05:50.597 00:05:50.597 real 0m2.801s 00:05:50.597 user 0m7.666s 00:05:50.597 sys 0m0.384s 00:05:50.597 17:47:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.597 17:47:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.597 ************************************ 00:05:50.597 END TEST locking_overlapped_coremask 00:05:50.597 ************************************ 00:05:50.597 17:47:08 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:50.597 17:47:08 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.597 17:47:08 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.597 17:47:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.597 ************************************ 00:05:50.597 START TEST locking_overlapped_coremask_via_rpc 00:05:50.597 ************************************ 00:05:50.597 17:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:50.597 17:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59280 00:05:50.597 17:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59280 /var/tmp/spdk.sock 00:05:50.597 17:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59280 ']' 00:05:50.597 17:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.597 17:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.597 17:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.597 17:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.597 17:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:50.597 17:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.597 [2024-10-25 17:47:08.921233] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:05:50.597 [2024-10-25 17:47:08.921351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59280 ] 00:05:50.855 [2024-10-25 17:47:09.077306] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
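check_remaining_locks in the overlapped-coremask teardown above compares the lock files actually on disk against a brace-expanded expectation. The trick, restated for the -m 0x7 case (cores 0-2), where quoting the right-hand side keeps [[ == ]] from treating it as a glob:

# Sketch of check_remaining_locks from the trace: after running with -m 0x7,
# exactly /var/tmp/spdk_cpu_lock_000..002 should exist and nothing else.
check_remaining_locks_sketch() {
    local -a locks locks_expected
    locks=(/var/tmp/spdk_cpu_lock_*)               # what is actually there
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]]    # quoted RHS = literal match
}
check_remaining_locks_sketch && echo "lock files match mask 0x7"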
00:05:50.855 [2024-10-25 17:47:09.077345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:50.855 [2024-10-25 17:47:09.162673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.855 [2024-10-25 17:47:09.162900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.855 [2024-10-25 17:47:09.162910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.419 17:47:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.419 17:47:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:51.419 17:47:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:51.419 17:47:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59297 00:05:51.419 17:47:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59297 /var/tmp/spdk2.sock 00:05:51.419 17:47:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59297 ']' 00:05:51.419 17:47:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:51.419 17:47:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:51.419 17:47:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:51.419 17:47:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.419 17:47:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.419 [2024-10-25 17:47:09.779855] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:05:51.419 [2024-10-25 17:47:09.780272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59297 ] 00:05:51.676 [2024-10-25 17:47:09.953605] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:51.676 [2024-10-25 17:47:09.953649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:51.933 [2024-10-25 17:47:10.159631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:51.933 [2024-10-25 17:47:10.159704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.933 [2024-10-25 17:47:10.159729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:52.868 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.868 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:52.868 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:52.868 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.868 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.868 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.868 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:52.868 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:52.868 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:52.868 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:52.868 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.868 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:52.868 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.868 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:52.868 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.868 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.868 [2024-10-25 17:47:11.273677] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59280 has claimed it. 
00:05:52.868 request: 00:05:52.868 { 00:05:52.868 "method": "framework_enable_cpumask_locks", 00:05:52.868 "req_id": 1 00:05:52.868 } 00:05:52.868 Got JSON-RPC error response 00:05:52.868 response: 00:05:52.868 { 00:05:52.868 "code": -32603, 00:05:52.868 "message": "Failed to claim CPU core: 2" 00:05:52.868 } 00:05:52.868 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:52.868 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:52.868 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:52.868 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:52.868 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:52.868 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59280 /var/tmp/spdk.sock 00:05:52.868 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59280 ']' 00:05:52.868 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.868 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.868 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.868 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.868 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.126 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.126 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:53.126 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59297 /var/tmp/spdk2.sock 00:05:53.126 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59297 ']' 00:05:53.126 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.126 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.126 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
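The error above is the expected outcome of the overlap test: the first target (pid 59280) came up on cores 0-2 with cpumask locks enabled, while the second target (pid 59297) was launched with -m 0x1c (binary 11100, i.e. cores 2, 3 and 4) and --disable-cpumask-locks, so it could start despite sharing core 2. Re-enabling the locks over /var/tmp/spdk2.sock then collides with the lock already held for core 2. A condensed reproduction, assuming the same sockets and masks as in this run:

    # pid 59280 (cores 0-2) already holds the per-core lock for core 2;
    # 0x1c = 0b11100 selects cores 2, 3 and 4, overlapping it on core 2
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # => JSON-RPC error -32603: "Failed to claim CPU core: 2"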
00:05:53.126 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.126 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.384 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.384 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:53.384 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:53.384 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:53.384 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:53.384 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:53.384 00:05:53.384 real 0m2.853s 00:05:53.384 user 0m0.995s 00:05:53.384 sys 0m0.121s 00:05:53.384 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.384 ************************************ 00:05:53.384 END TEST locking_overlapped_coremask_via_rpc 00:05:53.384 ************************************ 00:05:53.384 17:47:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.384 17:47:11 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:53.384 17:47:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59280 ]] 00:05:53.384 17:47:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59280 00:05:53.384 17:47:11 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59280 ']' 00:05:53.384 17:47:11 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59280 00:05:53.384 17:47:11 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:53.384 17:47:11 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:53.384 17:47:11 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59280 00:05:53.384 17:47:11 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:53.384 17:47:11 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:53.384 killing process with pid 59280 00:05:53.384 17:47:11 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59280' 00:05:53.384 17:47:11 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59280 00:05:53.384 17:47:11 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59280 00:05:54.757 17:47:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59297 ]] 00:05:54.757 17:47:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59297 00:05:54.757 17:47:12 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59297 ']' 00:05:54.757 17:47:12 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59297 00:05:54.757 17:47:12 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:54.757 17:47:12 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:54.757 
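check_remaining_locks above then confirms that only the first target's lock files exist, since the second target never managed to enable its own. The comparison in event/cpu_locks.sh@36-38 boils down to a glob-versus-brace-expansion match; a condensed sketch:

    locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files present right now
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2 held by pid 59280
    [[ ${locks[*]} == "${locks_expected[*]}" ]]         # must match exactly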
17:47:12 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59297 00:05:54.757 17:47:12 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:54.757 17:47:12 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:54.757 killing process with pid 59297 00:05:54.757 17:47:12 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59297' 00:05:54.757 17:47:12 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59297 00:05:54.757 17:47:12 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59297 00:05:56.131 17:47:14 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:56.131 17:47:14 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:56.131 17:47:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59280 ]] 00:05:56.131 17:47:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59280 00:05:56.131 17:47:14 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59280 ']' 00:05:56.131 17:47:14 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59280 00:05:56.131 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59280) - No such process 00:05:56.131 Process with pid 59280 is not found 00:05:56.131 17:47:14 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59280 is not found' 00:05:56.131 17:47:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59297 ]] 00:05:56.131 17:47:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59297 00:05:56.131 17:47:14 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59297 ']' 00:05:56.131 17:47:14 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59297 00:05:56.131 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59297) - No such process 00:05:56.131 Process with pid 59297 is not found 00:05:56.131 17:47:14 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59297 is not found' 00:05:56.131 17:47:14 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:56.131 00:05:56.131 real 0m28.244s 00:05:56.131 user 0m49.113s 00:05:56.131 sys 0m4.170s 00:05:56.131 17:47:14 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.131 ************************************ 00:05:56.131 END TEST cpu_locks 00:05:56.131 ************************************ 00:05:56.131 17:47:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.131 00:05:56.131 real 0m55.279s 00:05:56.131 user 1m42.538s 00:05:56.131 sys 0m6.995s 00:05:56.131 17:47:14 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.131 17:47:14 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.132 ************************************ 00:05:56.132 END TEST event 00:05:56.132 ************************************ 00:05:56.132 17:47:14 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:56.132 17:47:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.132 17:47:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.132 17:47:14 -- common/autotest_common.sh@10 -- # set +x 00:05:56.132 ************************************ 00:05:56.132 START TEST thread 00:05:56.132 ************************************ 00:05:56.132 17:47:14 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:56.132 * Looking for test storage... 
00:05:56.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:56.132 17:47:14 thread -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:05:56.132 17:47:14 thread -- common/autotest_common.sh@1689 -- # lcov --version 00:05:56.132 17:47:14 thread -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:05:56.132 17:47:14 thread -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:05:56.132 17:47:14 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.132 17:47:14 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.132 17:47:14 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.132 17:47:14 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.132 17:47:14 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.132 17:47:14 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.132 17:47:14 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.132 17:47:14 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.132 17:47:14 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.132 17:47:14 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.132 17:47:14 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.132 17:47:14 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:56.132 17:47:14 thread -- scripts/common.sh@345 -- # : 1 00:05:56.132 17:47:14 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.132 17:47:14 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:56.132 17:47:14 thread -- scripts/common.sh@365 -- # decimal 1 00:05:56.132 17:47:14 thread -- scripts/common.sh@353 -- # local d=1 00:05:56.132 17:47:14 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.132 17:47:14 thread -- scripts/common.sh@355 -- # echo 1 00:05:56.132 17:47:14 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.132 17:47:14 thread -- scripts/common.sh@366 -- # decimal 2 00:05:56.132 17:47:14 thread -- scripts/common.sh@353 -- # local d=2 00:05:56.132 17:47:14 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.132 17:47:14 thread -- scripts/common.sh@355 -- # echo 2 00:05:56.132 17:47:14 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.132 17:47:14 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.132 17:47:14 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.132 17:47:14 thread -- scripts/common.sh@368 -- # return 0 00:05:56.132 17:47:14 thread -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.132 17:47:14 thread -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:05:56.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.132 --rc genhtml_branch_coverage=1 00:05:56.132 --rc genhtml_function_coverage=1 00:05:56.132 --rc genhtml_legend=1 00:05:56.132 --rc geninfo_all_blocks=1 00:05:56.132 --rc geninfo_unexecuted_blocks=1 00:05:56.132 00:05:56.132 ' 00:05:56.132 17:47:14 thread -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:05:56.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.132 --rc genhtml_branch_coverage=1 00:05:56.132 --rc genhtml_function_coverage=1 00:05:56.132 --rc genhtml_legend=1 00:05:56.132 --rc geninfo_all_blocks=1 00:05:56.132 --rc geninfo_unexecuted_blocks=1 00:05:56.132 00:05:56.132 ' 00:05:56.132 17:47:14 thread -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:05:56.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:56.132 --rc genhtml_branch_coverage=1 00:05:56.132 --rc genhtml_function_coverage=1 00:05:56.132 --rc genhtml_legend=1 00:05:56.132 --rc geninfo_all_blocks=1 00:05:56.132 --rc geninfo_unexecuted_blocks=1 00:05:56.132 00:05:56.132 ' 00:05:56.132 17:47:14 thread -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:05:56.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.132 --rc genhtml_branch_coverage=1 00:05:56.132 --rc genhtml_function_coverage=1 00:05:56.132 --rc genhtml_legend=1 00:05:56.132 --rc geninfo_all_blocks=1 00:05:56.132 --rc geninfo_unexecuted_blocks=1 00:05:56.132 00:05:56.132 ' 00:05:56.132 17:47:14 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:56.132 17:47:14 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:56.132 17:47:14 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.132 17:47:14 thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.132 ************************************ 00:05:56.132 START TEST thread_poller_perf 00:05:56.132 ************************************ 00:05:56.132 17:47:14 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:56.132 [2024-10-25 17:47:14.372854] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:05:56.132 [2024-10-25 17:47:14.373198] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59447 ] 00:05:56.132 [2024-10-25 17:47:14.530989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.390 [2024-10-25 17:47:14.611776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.390 Running 1000 pollers for 1 seconds with 1 microseconds period. 
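Judging by the banner printed above, poller_perf's flags map to it directly: -b 1000 registers 1000 pollers, -t 1 runs them for one second, and -l gives the poller period in microseconds (1 here; the second pass below repeats the measurement with -l 0, i.e. zero-period pollers that presumably run on every reactor iteration).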
00:05:57.323 [2024-10-25T17:47:15.758Z] ====================================== 00:05:57.323 [2024-10-25T17:47:15.758Z] busy:2609903708 (cyc) 00:05:57.323 [2024-10-25T17:47:15.758Z] total_run_count: 387000 00:05:57.323 [2024-10-25T17:47:15.758Z] tsc_hz: 2600000000 (cyc) 00:05:57.323 [2024-10-25T17:47:15.758Z] ====================================== 00:05:57.323 [2024-10-25T17:47:15.758Z] poller_cost: 6743 (cyc), 2593 (nsec) 00:05:57.323 00:05:57.323 real 0m1.397s 00:05:57.323 ************************************ 00:05:57.323 END TEST thread_poller_perf 00:05:57.323 ************************************ 00:05:57.323 user 0m1.221s 00:05:57.323 sys 0m0.069s 00:05:57.323 17:47:15 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.323 17:47:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:57.582 17:47:15 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:57.582 17:47:15 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:57.582 17:47:15 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.582 17:47:15 thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.582 ************************************ 00:05:57.582 START TEST thread_poller_perf 00:05:57.582 ************************************ 00:05:57.582 17:47:15 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:57.582 [2024-10-25 17:47:15.816333] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:05:57.582 [2024-10-25 17:47:15.816438] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59484 ] 00:05:57.582 [2024-10-25 17:47:15.974552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.839 Running 1000 pollers for 1 seconds with 0 microseconds period. 
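The reported poller_cost is consistent with busy cycles divided by iteration count, converted to nanoseconds via the TSC rate: 2609903708 cyc / 387000 runs = 6743 cyc per poll, and 6743 cyc / 2.6 GHz = 2593 nsec, matching the summary above. Applying the same arithmetic to the zero-period results that follow gives 2603109984 / 3924000 = 663 cyc, or 255 nsec: roughly a tenth of the timed-poller cost.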
00:05:57.839 [2024-10-25 17:47:16.072777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.214 [2024-10-25T17:47:17.649Z] ====================================== 00:05:59.214 [2024-10-25T17:47:17.649Z] busy:2603109984 (cyc) 00:05:59.214 [2024-10-25T17:47:17.649Z] total_run_count: 3924000 00:05:59.214 [2024-10-25T17:47:17.649Z] tsc_hz: 2600000000 (cyc) 00:05:59.214 [2024-10-25T17:47:17.649Z] ====================================== 00:05:59.214 [2024-10-25T17:47:17.649Z] poller_cost: 663 (cyc), 255 (nsec) 00:05:59.214 ************************************ 00:05:59.214 END TEST thread_poller_perf 00:05:59.214 ************************************ 00:05:59.214 00:05:59.214 real 0m1.438s 00:05:59.214 user 0m1.268s 00:05:59.214 sys 0m0.064s 00:05:59.214 17:47:17 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.214 17:47:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:59.214 17:47:17 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:59.214 ************************************ 00:05:59.214 END TEST thread 00:05:59.214 ************************************ 00:05:59.214 00:05:59.214 real 0m3.042s 00:05:59.214 user 0m2.583s 00:05:59.214 sys 0m0.249s 00:05:59.214 17:47:17 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.214 17:47:17 thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.214 17:47:17 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:59.214 17:47:17 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:59.214 17:47:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.214 17:47:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.214 17:47:17 -- common/autotest_common.sh@10 -- # set +x 00:05:59.214 ************************************ 00:05:59.214 START TEST app_cmdline 00:05:59.214 ************************************ 00:05:59.214 17:47:17 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:59.214 * Looking for test storage... 
00:05:59.214 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:59.214 17:47:17 app_cmdline -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:05:59.214 17:47:17 app_cmdline -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:05:59.214 17:47:17 app_cmdline -- common/autotest_common.sh@1689 -- # lcov --version 00:05:59.214 17:47:17 app_cmdline -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:05:59.214 17:47:17 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.214 17:47:17 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.214 17:47:17 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.214 17:47:17 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.214 17:47:17 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.214 17:47:17 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.214 17:47:17 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.214 17:47:17 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.214 17:47:17 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.214 17:47:17 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.214 17:47:17 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.214 17:47:17 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:59.214 17:47:17 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:59.214 17:47:17 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.214 17:47:17 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:59.214 17:47:17 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:59.214 17:47:17 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:59.214 17:47:17 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.214 17:47:17 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:59.214 17:47:17 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.214 17:47:17 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:59.214 17:47:17 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:59.214 17:47:17 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.214 17:47:17 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:59.214 17:47:17 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.214 17:47:17 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.214 17:47:17 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.214 17:47:17 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:59.214 17:47:17 app_cmdline -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.214 17:47:17 app_cmdline -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:05:59.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.214 --rc genhtml_branch_coverage=1 00:05:59.214 --rc genhtml_function_coverage=1 00:05:59.214 --rc genhtml_legend=1 00:05:59.214 --rc geninfo_all_blocks=1 00:05:59.214 --rc geninfo_unexecuted_blocks=1 00:05:59.214 00:05:59.214 ' 00:05:59.214 17:47:17 app_cmdline -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:05:59.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.214 --rc genhtml_branch_coverage=1 00:05:59.214 --rc genhtml_function_coverage=1 00:05:59.214 --rc genhtml_legend=1 00:05:59.214 --rc geninfo_all_blocks=1 00:05:59.214 --rc geninfo_unexecuted_blocks=1 00:05:59.214 
00:05:59.214 ' 00:05:59.214 17:47:17 app_cmdline -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:05:59.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.214 --rc genhtml_branch_coverage=1 00:05:59.214 --rc genhtml_function_coverage=1 00:05:59.214 --rc genhtml_legend=1 00:05:59.214 --rc geninfo_all_blocks=1 00:05:59.214 --rc geninfo_unexecuted_blocks=1 00:05:59.214 00:05:59.214 ' 00:05:59.214 17:47:17 app_cmdline -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:05:59.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.214 --rc genhtml_branch_coverage=1 00:05:59.214 --rc genhtml_function_coverage=1 00:05:59.214 --rc genhtml_legend=1 00:05:59.214 --rc geninfo_all_blocks=1 00:05:59.214 --rc geninfo_unexecuted_blocks=1 00:05:59.214 00:05:59.214 ' 00:05:59.214 17:47:17 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:59.214 17:47:17 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59567 00:05:59.214 17:47:17 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59567 00:05:59.214 17:47:17 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 59567 ']' 00:05:59.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.214 17:47:17 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.214 17:47:17 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:59.214 17:47:17 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:59.214 17:47:17 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.214 17:47:17 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:59.214 17:47:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:59.214 [2024-10-25 17:47:17.511639] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:05:59.214 [2024-10-25 17:47:17.511751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59567 ] 00:05:59.473 [2024-10-25 17:47:17.670884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.473 [2024-10-25 17:47:17.766896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.039 17:47:18 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.039 17:47:18 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:00.039 17:47:18 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:00.298 { 00:06:00.298 "version": "SPDK v25.01-pre git sha1 e83d2213a", 00:06:00.298 "fields": { 00:06:00.298 "major": 25, 00:06:00.298 "minor": 1, 00:06:00.298 "patch": 0, 00:06:00.298 "suffix": "-pre", 00:06:00.298 "commit": "e83d2213a" 00:06:00.298 } 00:06:00.298 } 00:06:00.298 17:47:18 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:00.298 17:47:18 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:00.298 17:47:18 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:00.298 17:47:18 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:00.298 17:47:18 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:00.298 17:47:18 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:00.298 17:47:18 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:00.298 17:47:18 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.298 17:47:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:00.298 17:47:18 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.298 17:47:18 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:00.298 17:47:18 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:00.298 17:47:18 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:00.298 17:47:18 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:00.298 17:47:18 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:00.298 17:47:18 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:00.298 17:47:18 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.298 17:47:18 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:00.298 17:47:18 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.298 17:47:18 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:00.298 17:47:18 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.298 17:47:18 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:00.298 17:47:18 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:00.298 17:47:18 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:00.557 request: 00:06:00.557 { 00:06:00.557 "method": "env_dpdk_get_mem_stats", 00:06:00.557 "req_id": 1 00:06:00.557 } 00:06:00.557 Got JSON-RPC error response 00:06:00.557 response: 00:06:00.557 { 00:06:00.557 "code": -32601, 00:06:00.557 "message": "Method not found" 00:06:00.557 } 00:06:00.557 17:47:18 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:00.557 17:47:18 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:00.557 17:47:18 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:00.557 17:47:18 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:00.557 17:47:18 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59567 00:06:00.557 17:47:18 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 59567 ']' 00:06:00.557 17:47:18 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 59567 00:06:00.557 17:47:18 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:00.557 17:47:18 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.557 17:47:18 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59567 00:06:00.557 killing process with pid 59567 00:06:00.557 17:47:18 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.557 17:47:18 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.557 17:47:18 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59567' 00:06:00.557 17:47:18 app_cmdline -- common/autotest_common.sh@969 -- # kill 59567 00:06:00.557 17:47:18 app_cmdline -- common/autotest_common.sh@974 -- # wait 59567 00:06:01.936 00:06:01.936 real 0m3.007s 00:06:01.936 user 0m3.314s 00:06:01.936 sys 0m0.410s 00:06:01.936 17:47:20 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.936 ************************************ 00:06:01.936 END TEST app_cmdline 00:06:01.936 ************************************ 00:06:01.936 17:47:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:01.936 17:47:20 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:01.936 17:47:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.936 17:47:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.936 17:47:20 -- common/autotest_common.sh@10 -- # set +x 00:06:01.936 ************************************ 00:06:01.936 START TEST version 00:06:01.936 ************************************ 00:06:01.936 17:47:20 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:02.193 * Looking for test storage... 
00:06:02.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:02.193 17:47:20 version -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:02.193 17:47:20 version -- common/autotest_common.sh@1689 -- # lcov --version 00:06:02.193 17:47:20 version -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:02.194 17:47:20 version -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:02.194 17:47:20 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.194 17:47:20 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.194 17:47:20 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.194 17:47:20 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.194 17:47:20 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.194 17:47:20 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.194 17:47:20 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.194 17:47:20 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.194 17:47:20 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.194 17:47:20 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.194 17:47:20 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.194 17:47:20 version -- scripts/common.sh@344 -- # case "$op" in 00:06:02.194 17:47:20 version -- scripts/common.sh@345 -- # : 1 00:06:02.194 17:47:20 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.194 17:47:20 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:02.194 17:47:20 version -- scripts/common.sh@365 -- # decimal 1 00:06:02.194 17:47:20 version -- scripts/common.sh@353 -- # local d=1 00:06:02.194 17:47:20 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.194 17:47:20 version -- scripts/common.sh@355 -- # echo 1 00:06:02.194 17:47:20 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.194 17:47:20 version -- scripts/common.sh@366 -- # decimal 2 00:06:02.194 17:47:20 version -- scripts/common.sh@353 -- # local d=2 00:06:02.194 17:47:20 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.194 17:47:20 version -- scripts/common.sh@355 -- # echo 2 00:06:02.194 17:47:20 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.194 17:47:20 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.194 17:47:20 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.194 17:47:20 version -- scripts/common.sh@368 -- # return 0 00:06:02.194 17:47:20 version -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.194 17:47:20 version -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:02.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.194 --rc genhtml_branch_coverage=1 00:06:02.194 --rc genhtml_function_coverage=1 00:06:02.194 --rc genhtml_legend=1 00:06:02.194 --rc geninfo_all_blocks=1 00:06:02.194 --rc geninfo_unexecuted_blocks=1 00:06:02.194 00:06:02.194 ' 00:06:02.194 17:47:20 version -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:02.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.194 --rc genhtml_branch_coverage=1 00:06:02.194 --rc genhtml_function_coverage=1 00:06:02.194 --rc genhtml_legend=1 00:06:02.194 --rc geninfo_all_blocks=1 00:06:02.194 --rc geninfo_unexecuted_blocks=1 00:06:02.194 00:06:02.194 ' 00:06:02.194 17:47:20 version -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:02.194 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:02.194 --rc genhtml_branch_coverage=1 00:06:02.194 --rc genhtml_function_coverage=1 00:06:02.194 --rc genhtml_legend=1 00:06:02.194 --rc geninfo_all_blocks=1 00:06:02.194 --rc geninfo_unexecuted_blocks=1 00:06:02.194 00:06:02.194 ' 00:06:02.194 17:47:20 version -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:02.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.194 --rc genhtml_branch_coverage=1 00:06:02.194 --rc genhtml_function_coverage=1 00:06:02.194 --rc genhtml_legend=1 00:06:02.194 --rc geninfo_all_blocks=1 00:06:02.194 --rc geninfo_unexecuted_blocks=1 00:06:02.194 00:06:02.194 ' 00:06:02.194 17:47:20 version -- app/version.sh@17 -- # get_header_version major 00:06:02.194 17:47:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:02.194 17:47:20 version -- app/version.sh@14 -- # cut -f2 00:06:02.194 17:47:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:02.194 17:47:20 version -- app/version.sh@17 -- # major=25 00:06:02.194 17:47:20 version -- app/version.sh@18 -- # get_header_version minor 00:06:02.194 17:47:20 version -- app/version.sh@14 -- # cut -f2 00:06:02.194 17:47:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:02.194 17:47:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:02.194 17:47:20 version -- app/version.sh@18 -- # minor=1 00:06:02.194 17:47:20 version -- app/version.sh@19 -- # get_header_version patch 00:06:02.194 17:47:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:02.194 17:47:20 version -- app/version.sh@14 -- # cut -f2 00:06:02.194 17:47:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:02.194 17:47:20 version -- app/version.sh@19 -- # patch=0 00:06:02.194 17:47:20 version -- app/version.sh@20 -- # get_header_version suffix 00:06:02.194 17:47:20 version -- app/version.sh@14 -- # cut -f2 00:06:02.194 17:47:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:02.194 17:47:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:02.194 17:47:20 version -- app/version.sh@20 -- # suffix=-pre 00:06:02.194 17:47:20 version -- app/version.sh@22 -- # version=25.1 00:06:02.194 17:47:20 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:02.194 17:47:20 version -- app/version.sh@28 -- # version=25.1rc0 00:06:02.194 17:47:20 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:02.194 17:47:20 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:02.194 17:47:20 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:02.194 17:47:20 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:02.194 00:06:02.194 real 0m0.187s 00:06:02.194 user 0m0.133s 00:06:02.194 sys 0m0.082s 00:06:02.194 ************************************ 00:06:02.194 END TEST version 00:06:02.194 ************************************ 00:06:02.194 17:47:20 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.194 17:47:20 version -- common/autotest_common.sh@10 -- # set +x 00:06:02.194 17:47:20 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:02.194 17:47:20 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:02.194 17:47:20 -- spdk/autotest.sh@194 -- # uname -s 00:06:02.194 17:47:20 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:02.194 17:47:20 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:02.194 17:47:20 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:02.194 17:47:20 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:02.194 17:47:20 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:02.194 17:47:20 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:02.194 17:47:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.194 17:47:20 -- common/autotest_common.sh@10 -- # set +x 00:06:02.194 ************************************ 00:06:02.194 START TEST blockdev_nvme 00:06:02.194 ************************************ 00:06:02.194 17:47:20 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:02.194 * Looking for test storage... 00:06:02.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:02.512 17:47:20 blockdev_nvme -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:02.512 17:47:20 blockdev_nvme -- common/autotest_common.sh@1689 -- # lcov --version 00:06:02.512 17:47:20 blockdev_nvme -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:02.512 17:47:20 blockdev_nvme -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:02.512 17:47:20 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.512 17:47:20 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.512 17:47:20 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.512 17:47:20 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.512 17:47:20 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.512 17:47:20 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.512 17:47:20 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.512 17:47:20 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.512 17:47:20 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.512 17:47:20 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.512 17:47:20 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.512 17:47:20 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:02.512 17:47:20 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:02.512 17:47:20 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.512 17:47:20 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.512 17:47:20 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:02.512 17:47:20 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:02.512 17:47:20 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.512 17:47:20 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:02.512 17:47:20 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.512 17:47:20 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:02.512 17:47:20 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:02.512 17:47:20 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.512 17:47:20 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:02.512 17:47:20 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.512 17:47:20 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.512 17:47:20 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.512 17:47:20 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:02.512 17:47:20 blockdev_nvme -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.512 17:47:20 blockdev_nvme -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:02.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.512 --rc genhtml_branch_coverage=1 00:06:02.512 --rc genhtml_function_coverage=1 00:06:02.512 --rc genhtml_legend=1 00:06:02.512 --rc geninfo_all_blocks=1 00:06:02.512 --rc geninfo_unexecuted_blocks=1 00:06:02.512 00:06:02.512 ' 00:06:02.512 17:47:20 blockdev_nvme -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:02.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.512 --rc genhtml_branch_coverage=1 00:06:02.512 --rc genhtml_function_coverage=1 00:06:02.512 --rc genhtml_legend=1 00:06:02.512 --rc geninfo_all_blocks=1 00:06:02.512 --rc geninfo_unexecuted_blocks=1 00:06:02.512 00:06:02.512 ' 00:06:02.512 17:47:20 blockdev_nvme -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:02.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.512 --rc genhtml_branch_coverage=1 00:06:02.512 --rc genhtml_function_coverage=1 00:06:02.512 --rc genhtml_legend=1 00:06:02.512 --rc geninfo_all_blocks=1 00:06:02.512 --rc geninfo_unexecuted_blocks=1 00:06:02.512 00:06:02.512 ' 00:06:02.512 17:47:20 blockdev_nvme -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:02.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.512 --rc genhtml_branch_coverage=1 00:06:02.512 --rc genhtml_function_coverage=1 00:06:02.512 --rc genhtml_legend=1 00:06:02.512 --rc geninfo_all_blocks=1 00:06:02.512 --rc geninfo_unexecuted_blocks=1 00:06:02.512 00:06:02.512 ' 00:06:02.512 17:47:20 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:02.512 17:47:20 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:02.512 17:47:20 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:02.512 17:47:20 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:02.512 17:47:20 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:02.512 17:47:20 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:02.512 17:47:20 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:02.512 17:47:20 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:02.512 17:47:20 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:02.512 17:47:20 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:02.512 17:47:20 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:02.512 17:47:20 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:02.512 17:47:20 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:02.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.512 17:47:20 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:02.512 17:47:20 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:02.512 17:47:20 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:02.512 17:47:20 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:02.512 17:47:20 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:02.512 17:47:20 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:02.512 17:47:20 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:02.512 17:47:20 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:02.512 17:47:20 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:02.512 17:47:20 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:02.512 17:47:20 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:02.512 17:47:20 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59745 00:06:02.512 17:47:20 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:02.512 17:47:20 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59745 00:06:02.512 17:47:20 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 59745 ']' 00:06:02.512 17:47:20 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.512 17:47:20 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.513 17:47:20 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.513 17:47:20 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:02.513 17:47:20 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.513 17:47:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:02.513 [2024-10-25 17:47:20.785043] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:06:02.513 [2024-10-25 17:47:20.785156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59745 ] 00:06:02.785 [2024-10-25 17:47:20.941130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.785 [2024-10-25 17:47:21.034446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.351 17:47:21 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.351 17:47:21 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:06:03.351 17:47:21 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:03.351 17:47:21 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:03.351 17:47:21 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:03.351 17:47:21 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:03.351 17:47:21 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:03.351 17:47:21 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:03.351 17:47:21 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.351 17:47:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:03.610 17:47:21 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.610 17:47:21 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:03.610 17:47:21 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.610 17:47:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:03.610 17:47:21 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.610 17:47:21 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:03.610 17:47:21 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:03.610 17:47:21 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.610 17:47:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:03.610 17:47:21 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.610 17:47:21 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:03.610 17:47:21 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.610 17:47:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:03.610 17:47:21 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.610 17:47:21 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:03.610 17:47:21 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.610 17:47:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:03.610 17:47:22 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.610 17:47:22 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:03.610 17:47:22 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:03.610 17:47:22 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:03.610 17:47:22 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.610 17:47:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:03.869 17:47:22 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.869 17:47:22 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:03.870 17:47:22 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "fb57cbfe-74fe-4634-b726-a6cae4088093"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "fb57cbfe-74fe-4634-b726-a6cae4088093",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "8d82ccd9-c2c7-4c8e-91b5-ed176f56649e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "8d82ccd9-c2c7-4c8e-91b5-ed176f56649e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' 
"ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "e865af28-d8a8-4cd8-ab5d-bd7e94770a3a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e865af28-d8a8-4cd8-ab5d-bd7e94770a3a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "3f15aef8-9ceb-4d3e-b920-3c762dc6a540"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3f15aef8-9ceb-4d3e-b920-3c762dc6a540",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "30d2c67d-70d6-493f-8255-35fc4bf08a99"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "30d2c67d-70d6-493f-8255-35fc4bf08a99",' ' "numa_id": -1,' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "755287f9-0a01-4c50-ad21-5a36c98f1cc9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "755287f9-0a01-4c50-ad21-5a36c98f1cc9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:03.870 17:47:22 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:03.870 17:47:22 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:03.870 17:47:22 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:03.870 17:47:22 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:03.870 17:47:22 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 59745 00:06:03.870 17:47:22 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 59745 ']' 00:06:03.870 17:47:22 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 59745 00:06:03.870 17:47:22 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:06:03.870 17:47:22 blockdev_nvme -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:03.870 17:47:22 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59745 00:06:03.870 17:47:22 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:03.870 killing process with pid 59745 00:06:03.870 17:47:22 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:03.870 17:47:22 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59745' 00:06:03.870 17:47:22 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 59745 00:06:03.870 17:47:22 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 59745 00:06:05.251 17:47:23 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:05.251 17:47:23 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:05.251 17:47:23 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:05.251 17:47:23 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.251 17:47:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:05.251 ************************************ 00:06:05.251 START TEST bdev_hello_world 00:06:05.251 ************************************ 00:06:05.251 17:47:23 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:05.251 [2024-10-25 17:47:23.657477] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:06:05.251 [2024-10-25 17:47:23.657611] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59823 ] 00:06:05.512 [2024-10-25 17:47:23.812051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.512 [2024-10-25 17:47:23.912702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.082 [2024-10-25 17:47:24.449500] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:06.082 [2024-10-25 17:47:24.449545] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:06.082 [2024-10-25 17:47:24.449580] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:06.082 [2024-10-25 17:47:24.452141] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:06.082 [2024-10-25 17:47:24.453012] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:06.082 [2024-10-25 17:47:24.453043] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:06.082 [2024-10-25 17:47:24.453903] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
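For context, the hello_world step traced above boils down to one example binary run against the generated bdev config: it opens the named bdev, writes "Hello World!", reads it back, and stops the app. A minimal way to reproduce that step by hand, assuming the same repo layout as this run, is:

  cd /home/vagrant/spdk_repo/spdk
  # --json supplies the bdev configuration; -b names the bdev to open
  sudo ./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1

Both arguments are taken verbatim from the run_test invocation above.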
00:06:06.082 00:06:06.082 [2024-10-25 17:47:24.453935] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:07.029 ************************************ 00:06:07.029 END TEST bdev_hello_world 00:06:07.029 ************************************ 00:06:07.029 00:06:07.029 real 0m1.568s 00:06:07.029 user 0m1.289s 00:06:07.029 sys 0m0.171s 00:06:07.029 17:47:25 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.029 17:47:25 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:07.029 17:47:25 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:07.029 17:47:25 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:07.029 17:47:25 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.029 17:47:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:07.029 ************************************ 00:06:07.029 START TEST bdev_bounds 00:06:07.029 ************************************ 00:06:07.029 17:47:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:06:07.029 Process bdevio pid: 59864 00:06:07.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.029 17:47:25 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59864 00:06:07.029 17:47:25 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.029 17:47:25 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59864' 00:06:07.029 17:47:25 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59864 00:06:07.029 17:47:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 59864 ']' 00:06:07.029 17:47:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.029 17:47:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.029 17:47:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.029 17:47:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.029 17:47:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:07.029 17:47:25 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:07.029 [2024-10-25 17:47:25.284647] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
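The bdev_bounds test starting here launches bdevio in wait mode and then drives it over RPC. A hand-run equivalent, assuming the workspace paths from this log, would be:

  # shell 1: start bdevio and have it wait for an RPC before running anything
  sudo ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
  # shell 2: once the UNIX socket is up, trigger the suites
  sudo ./test/bdev/bdevio/tests.py perform_tests

The -w flag is why the log first waits for /var/tmp/spdk.sock and only then shows the CUnit output below: the suites do not run until the perform_tests RPC arrives.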
00:06:07.029 [2024-10-25 17:47:25.284900] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59864 ] 00:06:07.029 [2024-10-25 17:47:25.447354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.289 [2024-10-25 17:47:25.549295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.289 [2024-10-25 17:47:25.549589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.289 [2024-10-25 17:47:25.549629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.860 17:47:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.860 17:47:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:06:07.860 17:47:26 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:07.860 I/O targets: 00:06:07.860 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:07.860 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:07.860 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:07.860 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:07.860 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:07.860 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:07.860 00:06:07.860 00:06:07.860 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.860 http://cunit.sourceforge.net/ 00:06:07.860 00:06:07.860 00:06:07.860 Suite: bdevio tests on: Nvme3n1 00:06:07.860 Test: blockdev write read block ...passed 00:06:07.860 Test: blockdev write zeroes read block ...passed 00:06:07.860 Test: blockdev write zeroes read no split ...passed 00:06:07.860 Test: blockdev write zeroes read split ...passed 00:06:07.860 Test: blockdev write zeroes read split partial ...passed 00:06:07.860 Test: blockdev reset ...[2024-10-25 17:47:26.257627] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:07.860 passed 00:06:07.860 Test: blockdev write read 8 blocks ...[2024-10-25 17:47:26.262477] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
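The MiB figures in the I/O targets list above are simply block_size times num_blocks, so they can be cross-checked against the bdev dump earlier in the log:

  # Nvme2n1: 1048576 blocks of 4096 bytes
  echo $(( 1048576 * 4096 / 1024 / 1024 ))   # 4096 (MiB)
  # Nvme3n1: 262144 blocks of 4096 bytes
  echo $(( 262144 * 4096 / 1024 / 1024 ))    # 1024 (MiB)

Both results match the sizes printed in the targets list, so it is consistent with the num_blocks values reported by the JSON dump.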
00:06:07.860 passed 00:06:07.860 Test: blockdev write read size > 128k ...passed 00:06:07.860 Test: blockdev write read invalid size ...passed 00:06:07.860 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:07.860 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:07.861 Test: blockdev write read max offset ...passed 00:06:07.861 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:07.861 Test: blockdev writev readv 8 blocks ...passed 00:06:07.861 Test: blockdev writev readv 30 x 1block ...passed 00:06:07.861 Test: blockdev writev readv block ...passed 00:06:07.861 Test: blockdev writev readv size > 128k ...passed 00:06:07.861 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:07.861 Test: blockdev comparev and writev ...[2024-10-25 17:47:26.281314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ad40a000 len:0x1000 00:06:07.861 [2024-10-25 17:47:26.281359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:07.861 passed 00:06:07.861 Test: blockdev nvme passthru rw ...passed 00:06:07.861 Test: blockdev nvme passthru vendor specific ...[2024-10-25 17:47:26.283665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 Ppassed 00:06:07.861 Test: blockdev nvme admin passthru ...RP2 0x0 00:06:07.861 [2024-10-25 17:47:26.283779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:07.861 passed 00:06:07.861 Test: blockdev copy ...passed 00:06:07.861 Suite: bdevio tests on: Nvme2n3 00:06:07.861 Test: blockdev write read block ...passed 00:06:08.120 Test: blockdev write zeroes read block ...passed 00:06:08.120 Test: blockdev write zeroes read no split ...passed 00:06:08.120 Test: blockdev write zeroes read split ...passed 00:06:08.120 Test: blockdev write zeroes read split partial ...passed 00:06:08.120 Test: blockdev reset ...[2024-10-25 17:47:26.341824] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:08.120 [2024-10-25 17:47:26.345276] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller passed 00:06:08.120 Test: blockdev write read 8 blocks ...passed 00:06:08.120 Test: blockdev write read size > 128k ...successful. 
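The COMPARE FAILURE (02/85) and INVALID OPCODE (00/01) completions logged by the comparev and passthru tests are printed as NVMe (status code type/status code) pairs. Since the surrounding tests still report passed, these appear to be deliberately provoked completions rather than real failures, and the same pattern repeats in each per-namespace suite below. A small hypothetical helper, not part of the suite, that decodes the two pairs seen in this log:

  decode_nvme_status() {
    # argument is the SCT/SC pair exactly as printed above
    case "$1" in
      00/01) echo "Generic Command Status / Invalid Command Opcode" ;;
      02/85) echo "Media and Data Integrity Errors / Compare Failure" ;;
      *)     echo "unlisted pair $1 - see the NVMe base spec status tables" ;;
    esac
  }
  decode_nvme_status 02/85   # Media and Data Integrity Errors / Compare Failure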
00:06:08.120 passed 00:06:08.120 Test: blockdev write read invalid size ...passed 00:06:08.120 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:08.120 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:08.120 Test: blockdev write read max offset ...passed 00:06:08.120 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:08.120 Test: blockdev writev readv 8 blocks ...passed 00:06:08.120 Test: blockdev writev readv 30 x 1block ...passed 00:06:08.120 Test: blockdev writev readv block ...passed 00:06:08.120 Test: blockdev writev readv size > 128k ...passed 00:06:08.120 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:08.120 Test: blockdev comparev and writev ...[2024-10-25 17:47:26.351428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 passed 00:06:08.120 Test: blockdev nvme passthru rw ...passed 00:06:08.120 Test: blockdev nvme passthru vendor specific ...passed 00:06:08.120 Test: blockdev nvme admin passthru ...SGL DATA BLOCK ADDRESS 0x290e06000 len:0x1000 00:06:08.120 [2024-10-25 17:47:26.351546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:08.120 [2024-10-25 17:47:26.352029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:08.120 [2024-10-25 17:47:26.352054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:08.120 passed 00:06:08.120 Test: blockdev copy ...passed 00:06:08.120 Suite: bdevio tests on: Nvme2n2 00:06:08.120 Test: blockdev write read block ...passed 00:06:08.120 Test: blockdev write zeroes read block ...passed 00:06:08.120 Test: blockdev write zeroes read no split ...passed 00:06:08.120 Test: blockdev write zeroes read split ...passed 00:06:08.120 Test: blockdev write zeroes read split partial ...passed 00:06:08.120 Test: blockdev reset ...[2024-10-25 17:47:26.415726] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:08.120 [2024-10-25 17:47:26.419665] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller passed 00:06:08.120 Test: blockdev write read 8 blocks ...successful. 
00:06:08.120 passed 00:06:08.120 Test: blockdev write read size > 128k ...passed 00:06:08.120 Test: blockdev write read invalid size ...passed 00:06:08.120 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:08.120 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:08.120 Test: blockdev write read max offset ...passed 00:06:08.120 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:08.120 Test: blockdev writev readv 8 blocks ...passed 00:06:08.120 Test: blockdev writev readv 30 x 1block ...passed 00:06:08.120 Test: blockdev writev readv block ...passed 00:06:08.120 Test: blockdev writev readv size > 128k ...passed 00:06:08.120 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:08.120 Test: blockdev comparev and writev ...[2024-10-25 17:47:26.426806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1e3c000 len:0x1000 00:06:08.120 [2024-10-25 17:47:26.426930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:08.120 passed 00:06:08.120 Test: blockdev nvme passthru rw ...passed 00:06:08.120 Test: blockdev nvme passthru vendor specific ...[2024-10-25 17:47:26.427617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:08.120 [2024-10-25 17:47:26.427721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:06:08.120 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:06:08.120 passed 00:06:08.120 Test: blockdev copy ...passed 00:06:08.120 Suite: bdevio tests on: Nvme2n1 00:06:08.120 Test: blockdev write read block ...passed 00:06:08.120 Test: blockdev write zeroes read block ...passed 00:06:08.120 Test: blockdev write zeroes read no split ...passed 00:06:08.120 Test: blockdev write zeroes read split ...passed 00:06:08.120 Test: blockdev write zeroes read split partial ...passed 00:06:08.120 Test: blockdev reset ...[2024-10-25 17:47:26.484383] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:08.120 [2024-10-25 17:47:26.487260] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller passed 00:06:08.120 Test: blockdev write read 8 blocks ...successful. 
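Each suite begins with a blockdev reset that disconnects and reconnects the controller (the nvme_ctrlr_disconnect notice followed by _bdev_nvme_reset_ctrlr_complete). Outside of bdevio, the same reset path can be exercised against a running SPDK application whose controllers are attached through the bdev_nvme layer; a sketch, where the controller name is an assumption about how it was registered:

  # hypothetical name; use whatever bdev_nvme_attach_controller (or the JSON config) registered
  sudo scripts/rpc.py bdev_nvme_reset_controller Nvme0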
00:06:08.120 passed 00:06:08.120 Test: blockdev write read size > 128k ...passed 00:06:08.120 Test: blockdev write read invalid size ...passed 00:06:08.120 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:08.120 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:08.120 Test: blockdev write read max offset ...passed 00:06:08.120 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:08.120 Test: blockdev writev readv 8 blocks ...passed 00:06:08.120 Test: blockdev writev readv 30 x 1block ...passed 00:06:08.120 Test: blockdev writev readv block ...passed 00:06:08.120 Test: blockdev writev readv size > 128k ...passed 00:06:08.120 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:08.120 Test: blockdev comparev and writev ...[2024-10-25 17:47:26.493514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1e38000 len:0x1000 00:06:08.120 [2024-10-25 17:47:26.493564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:08.120 passed 00:06:08.120 Test: blockdev nvme passthru rw ...passed 00:06:08.120 Test: blockdev nvme passthru vendor specific ...passed 00:06:08.120 Test: blockdev nvme admin passthru ...[2024-10-25 17:47:26.494117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:08.120 [2024-10-25 17:47:26.494145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:08.120 passed 00:06:08.120 Test: blockdev copy ...passed 00:06:08.120 Suite: bdevio tests on: Nvme1n1 00:06:08.120 Test: blockdev write read block ...passed 00:06:08.120 Test: blockdev write zeroes read block ...passed 00:06:08.120 Test: blockdev write zeroes read no split ...passed 00:06:08.120 Test: blockdev write zeroes read split ...passed 00:06:08.120 Test: blockdev write zeroes read split partial ...passed 00:06:08.120 Test: blockdev reset ...[2024-10-25 17:47:26.544202] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:08.120 [2024-10-25 17:47:26.547389] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller passed 00:06:08.120 Test: blockdev write read 8 blocks ...successful. 
00:06:08.120 passed 00:06:08.120 Test: blockdev write read size > 128k ...passed 00:06:08.120 Test: blockdev write read invalid size ...passed 00:06:08.120 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:08.120 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:08.120 Test: blockdev write read max offset ...passed 00:06:08.120 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:08.120 Test: blockdev writev readv 8 blocks ...passed 00:06:08.120 Test: blockdev writev readv 30 x 1block ...passed 00:06:08.120 Test: blockdev writev readv block ...passed 00:06:08.380 Test: blockdev writev readv size > 128k ...passed 00:06:08.380 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:08.380 Test: blockdev comparev and writev ...[2024-10-25 17:47:26.556168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1e34000 len:0x1000 00:06:08.380 [2024-10-25 17:47:26.556286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:08.380 passed 00:06:08.380 Test: blockdev nvme passthru rw ...passed 00:06:08.380 Test: blockdev nvme passthru vendor specific ...[2024-10-25 17:47:26.556991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:08.380 [2024-10-25 17:47:26.557083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:06:08.380 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:06:08.380 passed 00:06:08.380 Test: blockdev copy ...passed 00:06:08.380 Suite: bdevio tests on: Nvme0n1 00:06:08.380 Test: blockdev write read block ...passed 00:06:08.380 Test: blockdev write zeroes read block ...passed 00:06:08.380 Test: blockdev write zeroes read no split ...passed 00:06:08.380 Test: blockdev write zeroes read split ...passed 00:06:08.380 Test: blockdev write zeroes read split partial ...passed 00:06:08.380 Test: blockdev reset ...[2024-10-25 17:47:26.619177] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:08.380 [2024-10-25 17:47:26.621887] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller passed 00:06:08.380 Test: blockdev write read 8 blocks ...successful. 00:06:08.380 passed 00:06:08.380 Test: blockdev write read size > 128k ...passed 00:06:08.380 Test: blockdev write read invalid size ...passed 00:06:08.380 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:08.380 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:08.380 Test: blockdev write read max offset ...passed 00:06:08.380 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:08.380 Test: blockdev writev readv 8 blocks ...passed 00:06:08.380 Test: blockdev writev readv 30 x 1block ...passed 00:06:08.380 Test: blockdev writev readv block ...passed 00:06:08.380 Test: blockdev writev readv size > 128k ...passed 00:06:08.380 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:08.380 Test: blockdev comparev and writev ...passed 00:06:08.380 Test: blockdev nvme passthru rw ...[2024-10-25 17:47:26.628570] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:08.380 separate metadata which is not supported yet. 
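Note the one divergence in the Nvme0n1 suite just below: bdevio skips comparev_and_writev because that bdev carries separate metadata, which the test does not support yet. Whether a bdev is formatted with metadata can be checked over RPC from the same app; a sketch, assuming an SPDK build where bdev_get_bdevs reports an md_size field:

  sudo scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {name, block_size, md_size}'
  # a non-zero md_size would explain the skip message printed below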
00:06:08.380 passed 00:06:08.380 Test: blockdev nvme passthru vendor specific ...passed 00:06:08.380 Test: blockdev nvme admin passthru ...[2024-10-25 17:47:26.628933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:08.380 [2024-10-25 17:47:26.628973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:08.380 passed 00:06:08.380 Test: blockdev copy ...passed 00:06:08.380 00:06:08.380 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.380 suites 6 6 n/a 0 0 00:06:08.380 tests 138 138 138 0 0 00:06:08.380 asserts 893 893 893 0 n/a 00:06:08.380 00:06:08.380 Elapsed time = 1.084 seconds 00:06:08.380 0 00:06:08.380 17:47:26 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59864 00:06:08.380 17:47:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 59864 ']' 00:06:08.380 17:47:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 59864 00:06:08.380 17:47:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:06:08.380 17:47:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.380 17:47:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59864 00:06:08.380 17:47:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:08.380 17:47:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:08.380 17:47:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59864' 00:06:08.380 killing process with pid 59864 00:06:08.380 17:47:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 59864 00:06:08.380 17:47:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 59864 00:06:08.951 17:47:27 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:08.951 00:06:08.951 real 0m2.108s 00:06:08.951 user 0m5.336s 00:06:08.951 sys 0m0.270s 00:06:08.951 17:47:27 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.951 17:47:27 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:08.951 ************************************ 00:06:08.951 END TEST bdev_bounds 00:06:08.951 ************************************ 00:06:08.951 17:47:27 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:08.951 17:47:27 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:08.951 17:47:27 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.951 17:47:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:09.212 ************************************ 00:06:09.212 START TEST bdev_nbd 00:06:09.212 ************************************ 00:06:09.212 17:47:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:09.212 17:47:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:09.212 17:47:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:09.212 17:47:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:09.212 17:47:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:09.212 17:47:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:09.212 17:47:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:09.212 17:47:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:09.212 17:47:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:09.212 17:47:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:09.212 17:47:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:09.212 17:47:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:09.212 17:47:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:09.212 17:47:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:09.212 17:47:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:09.212 17:47:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:09.212 17:47:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=59919 00:06:09.212 17:47:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:09.212 17:47:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 59919 /var/tmp/spdk-nbd.sock 00:06:09.212 17:47:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 59919 ']' 00:06:09.212 17:47:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:09.212 17:47:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:09.212 17:47:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.212 17:47:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:09.212 17:47:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.212 17:47:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:09.212 [2024-10-25 17:47:27.458899] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
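The bdev_nbd test that begins here exports each bdev as a kernel /dev/nbdX device through a bare bdev_svc app listening on /var/tmp/spdk-nbd.sock. Condensed from the commands visible in this log, the pattern is:

  # start a bdev application that only serves RPCs
  sudo ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json test/bdev/bdev.json &
  # export a bdev as a kernel block device, then confirm it appeared
  sudo scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  grep -q -w nbd0 /proc/partitions
  # the suite's readiness probe: one direct 4 KiB read straight off the device
  dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
  # tear down
  sudo scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0

The waitfornbd helper traced below wraps the grep in a bounded retry loop (up to 20 attempts) before running the dd, which is where the repeated "1+0 records in/out" lines that follow come from.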
00:06:09.212 [2024-10-25 17:47:27.459132] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:09.212 [2024-10-25 17:47:27.617634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.471 [2024-10-25 17:47:27.714869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.042 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.042 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:06:10.042 17:47:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:10.042 17:47:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.042 17:47:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:10.042 17:47:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:10.042 17:47:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:10.042 17:47:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.042 17:47:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:10.042 17:47:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:10.042 17:47:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:10.042 17:47:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:10.042 17:47:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:10.042 17:47:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:10.042 17:47:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:10.303 17:47:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:10.303 17:47:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:10.303 17:47:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:10.303 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:10.303 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:10.303 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:10.303 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:10.303 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:10.303 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:10.303 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:10.303 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:10.303 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:10.303 1+0 records in 
00:06:10.303 1+0 records out 00:06:10.303 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000653322 s, 6.3 MB/s 00:06:10.303 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:10.303 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:10.303 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:10.303 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:10.303 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:10.303 17:47:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:10.303 17:47:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:10.303 17:47:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:10.565 1+0 records in 00:06:10.565 1+0 records out 00:06:10.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119731 s, 3.4 MB/s 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:10.565 1+0 records in 00:06:10.565 1+0 records out 00:06:10.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0014521 s, 2.8 MB/s 00:06:10.565 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:10.827 17:47:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:10.827 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:10.827 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:10.827 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:10.827 17:47:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:10.827 17:47:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:10.827 17:47:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:10.827 17:47:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:10.827 17:47:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:10.827 17:47:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:10.827 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:06:10.827 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:10.827 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:10.827 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:10.827 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:06:10.827 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:10.827 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:10.827 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:10.827 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:10.827 1+0 records in 00:06:10.827 1+0 records out 00:06:10.827 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110809 s, 3.7 MB/s 00:06:10.827 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:10.827 17:47:29 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:10.827 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:10.827 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:10.827 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:10.827 17:47:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:10.827 17:47:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:10.827 17:47:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:11.089 17:47:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:11.089 17:47:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:11.089 17:47:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:11.089 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:06:11.089 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:11.089 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:11.089 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:11.089 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:06:11.089 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:11.089 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:11.089 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:11.089 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:11.090 1+0 records in 00:06:11.090 1+0 records out 00:06:11.090 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000715056 s, 5.7 MB/s 00:06:11.090 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.090 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:11.090 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.090 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:11.090 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:11.090 17:47:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:11.090 17:47:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:11.090 17:47:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:11.351 17:47:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:11.351 17:47:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:11.351 17:47:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:11.351 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:06:11.351 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:11.351 17:47:29 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:11.351 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:11.351 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:06:11.351 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:11.351 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:11.351 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:11.351 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:11.351 1+0 records in 00:06:11.351 1+0 records out 00:06:11.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000700168 s, 5.9 MB/s 00:06:11.351 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.351 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:11.351 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.351 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:11.351 17:47:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:11.351 17:47:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:11.351 17:47:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:11.351 17:47:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.612 17:47:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:11.612 { 00:06:11.612 "nbd_device": "/dev/nbd0", 00:06:11.612 "bdev_name": "Nvme0n1" 00:06:11.612 }, 00:06:11.612 { 00:06:11.612 "nbd_device": "/dev/nbd1", 00:06:11.612 "bdev_name": "Nvme1n1" 00:06:11.612 }, 00:06:11.612 { 00:06:11.612 "nbd_device": "/dev/nbd2", 00:06:11.612 "bdev_name": "Nvme2n1" 00:06:11.612 }, 00:06:11.612 { 00:06:11.612 "nbd_device": "/dev/nbd3", 00:06:11.612 "bdev_name": "Nvme2n2" 00:06:11.612 }, 00:06:11.612 { 00:06:11.612 "nbd_device": "/dev/nbd4", 00:06:11.613 "bdev_name": "Nvme2n3" 00:06:11.613 }, 00:06:11.613 { 00:06:11.613 "nbd_device": "/dev/nbd5", 00:06:11.613 "bdev_name": "Nvme3n1" 00:06:11.613 } 00:06:11.613 ]' 00:06:11.613 17:47:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:11.613 17:47:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:11.613 { 00:06:11.613 "nbd_device": "/dev/nbd0", 00:06:11.613 "bdev_name": "Nvme0n1" 00:06:11.613 }, 00:06:11.613 { 00:06:11.613 "nbd_device": "/dev/nbd1", 00:06:11.613 "bdev_name": "Nvme1n1" 00:06:11.613 }, 00:06:11.613 { 00:06:11.613 "nbd_device": "/dev/nbd2", 00:06:11.613 "bdev_name": "Nvme2n1" 00:06:11.613 }, 00:06:11.613 { 00:06:11.613 "nbd_device": "/dev/nbd3", 00:06:11.613 "bdev_name": "Nvme2n2" 00:06:11.613 }, 00:06:11.613 { 00:06:11.613 "nbd_device": "/dev/nbd4", 00:06:11.613 "bdev_name": "Nvme2n3" 00:06:11.613 }, 00:06:11.613 { 00:06:11.613 "nbd_device": "/dev/nbd5", 00:06:11.613 "bdev_name": "Nvme3n1" 00:06:11.613 } 00:06:11.613 ]' 00:06:11.613 17:47:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:11.613 17:47:29 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:11.613 17:47:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.613 17:47:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:11.613 17:47:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:11.613 17:47:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:11.613 17:47:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.613 17:47:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:11.874 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:11.874 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:11.874 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:11.874 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.874 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.874 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:11.874 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:11.874 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.874 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.874 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:12.134 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:12.134 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:12.134 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:12.134 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.134 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.134 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:12.134 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:12.134 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.134 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.134 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:12.134 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:12.134 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:12.134 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:12.134 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.134 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.134 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:12.134 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:12.134 17:47:30 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:12.134 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.134 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:12.395 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:12.395 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:12.395 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:12.395 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.395 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.395 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:12.395 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:12.395 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.395 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.395 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:12.654 17:47:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:12.654 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:12.654 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:12.654 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.654 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.654 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:12.654 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:12.654 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.654 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.654 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:12.914 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:12.914 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:12.914 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:12.914 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.914 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.914 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:12.915 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:12.915 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.915 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.915 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.915 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.175 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:13.175 17:47:31 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.175 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:13.175 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:13.175 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:13.175 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.175 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:13.175 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:13.175 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:13.175 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:13.175 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:13.175 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:13.176 17:47:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:13.176 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.176 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:13.176 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:13.176 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:13.176 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:13.176 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:13.176 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.176 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:13.176 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:13.176 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:13.176 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:13.176 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:13.176 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:13.176 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:13.176 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:13.435 /dev/nbd0 00:06:13.435 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:13.435 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:13.435 17:47:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:13.435 17:47:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:13.435 17:47:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:13.435 
17:47:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:13.435 17:47:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:13.435 17:47:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:13.435 17:47:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:13.435 17:47:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:13.435 17:47:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:13.435 1+0 records in 00:06:13.435 1+0 records out 00:06:13.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246594 s, 16.6 MB/s 00:06:13.435 17:47:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.435 17:47:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:13.435 17:47:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.435 17:47:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:13.435 17:47:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:13.435 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.435 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:13.435 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:13.693 /dev/nbd1 00:06:13.693 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:13.693 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:13.693 17:47:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:13.693 17:47:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:13.693 17:47:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:13.693 17:47:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:13.693 17:47:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:13.693 17:47:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:13.693 17:47:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:13.693 17:47:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:13.693 17:47:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:13.693 1+0 records in 00:06:13.693 1+0 records out 00:06:13.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485443 s, 8.4 MB/s 00:06:13.693 17:47:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.693 17:47:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:13.693 17:47:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.693 17:47:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:13.693 17:47:31 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@889 -- # return 0 00:06:13.693 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.693 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:13.693 17:47:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:13.951 /dev/nbd10 00:06:13.951 17:47:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:13.951 17:47:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:13.951 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:06:13.951 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:13.951 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:13.951 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:13.951 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:06:13.951 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:13.951 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:13.951 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:13.951 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:13.951 1+0 records in 00:06:13.951 1+0 records out 00:06:13.951 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365559 s, 11.2 MB/s 00:06:13.951 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.951 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:13.951 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.951 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:13.951 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:13.951 17:47:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.951 17:47:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:13.951 17:47:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:14.210 /dev/nbd11 00:06:14.210 17:47:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:14.210 17:47:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:14.210 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:06:14.210 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:14.210 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:14.210 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:14.210 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:06:14.210 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:14.210 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:14.210 17:47:32 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:14.210 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:14.210 1+0 records in 00:06:14.210 1+0 records out 00:06:14.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036823 s, 11.1 MB/s 00:06:14.210 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.210 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:14.210 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.210 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:14.210 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:14.210 17:47:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.210 17:47:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:14.210 17:47:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:14.210 /dev/nbd12 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:14.468 1+0 records in 00:06:14.468 1+0 records out 00:06:14.468 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000451416 s, 9.1 MB/s 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:14.468 /dev/nbd13 
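[Editor's note: the repetitive grep/dd/stat runs traced above and below are SPDK's waitfornbd helper from test/common/autotest_common.sh, executed once per device as each NBD disk is started. A condensed sketch of the pattern, reconstructed from the trace rather than copied from the source (the temp-file path and retry delay are assumptions):]

    # Wait for /dev/$1 to appear in /proc/partitions (up to 20 tries),
    # then issue one direct 4 KiB read to prove the device services I/O.
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed delay; the xtrace output does not show one
        done
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct && break
        done
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]    # a non-empty read-back means the device is live
    }

[The companion waitfornbd_exit used during teardown is the inverse: it loops until grep no longer finds the name in /proc/partitions.]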
00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:14.468 1+0 records in 00:06:14.468 1+0 records out 00:06:14.468 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425766 s, 9.6 MB/s 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.468 17:47:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.726 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:14.726 { 00:06:14.726 "nbd_device": "/dev/nbd0", 00:06:14.726 "bdev_name": "Nvme0n1" 00:06:14.727 }, 00:06:14.727 { 00:06:14.727 "nbd_device": "/dev/nbd1", 00:06:14.727 "bdev_name": "Nvme1n1" 00:06:14.727 }, 00:06:14.727 { 00:06:14.727 "nbd_device": "/dev/nbd10", 00:06:14.727 "bdev_name": "Nvme2n1" 00:06:14.727 }, 00:06:14.727 { 00:06:14.727 "nbd_device": "/dev/nbd11", 00:06:14.727 "bdev_name": "Nvme2n2" 00:06:14.727 }, 00:06:14.727 { 00:06:14.727 "nbd_device": "/dev/nbd12", 00:06:14.727 "bdev_name": "Nvme2n3" 00:06:14.727 }, 00:06:14.727 { 00:06:14.727 "nbd_device": "/dev/nbd13", 00:06:14.727 "bdev_name": "Nvme3n1" 00:06:14.727 } 00:06:14.727 ]' 00:06:14.727 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.727 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:14.727 { 00:06:14.727 "nbd_device": "/dev/nbd0", 00:06:14.727 "bdev_name": "Nvme0n1" 00:06:14.727 }, 00:06:14.727 { 00:06:14.727 "nbd_device": "/dev/nbd1", 00:06:14.727 "bdev_name": "Nvme1n1" 00:06:14.727 
}, 00:06:14.727 { 00:06:14.727 "nbd_device": "/dev/nbd10", 00:06:14.727 "bdev_name": "Nvme2n1" 00:06:14.727 }, 00:06:14.727 { 00:06:14.727 "nbd_device": "/dev/nbd11", 00:06:14.727 "bdev_name": "Nvme2n2" 00:06:14.727 }, 00:06:14.727 { 00:06:14.727 "nbd_device": "/dev/nbd12", 00:06:14.727 "bdev_name": "Nvme2n3" 00:06:14.727 }, 00:06:14.727 { 00:06:14.727 "nbd_device": "/dev/nbd13", 00:06:14.727 "bdev_name": "Nvme3n1" 00:06:14.727 } 00:06:14.727 ]' 00:06:14.727 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:14.727 /dev/nbd1 00:06:14.727 /dev/nbd10 00:06:14.727 /dev/nbd11 00:06:14.727 /dev/nbd12 00:06:14.727 /dev/nbd13' 00:06:14.727 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.727 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:14.727 /dev/nbd1 00:06:14.727 /dev/nbd10 00:06:14.727 /dev/nbd11 00:06:14.727 /dev/nbd12 00:06:14.727 /dev/nbd13' 00:06:14.727 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:14.727 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:14.727 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:14.727 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:14.727 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:14.727 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:14.727 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.727 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:14.727 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:14.727 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:14.727 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:14.727 256+0 records in 00:06:14.727 256+0 records out 00:06:14.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00604415 s, 173 MB/s 00:06:14.727 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.727 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:14.984 256+0 records in 00:06:14.984 256+0 records out 00:06:14.984 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0498331 s, 21.0 MB/s 00:06:14.984 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.984 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:14.984 256+0 records in 00:06:14.984 256+0 records out 00:06:14.984 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0513541 s, 20.4 MB/s 00:06:14.984 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.985 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:14.985 256+0 records in 00:06:14.985 256+0 records out 00:06:14.985 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0510698 s, 20.5 MB/s 00:06:14.985 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.985 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:14.985 256+0 records in 00:06:14.985 256+0 records out 00:06:14.985 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0496913 s, 21.1 MB/s 00:06:14.985 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.985 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:14.985 256+0 records in 00:06:14.985 256+0 records out 00:06:14.985 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0515064 s, 20.4 MB/s 00:06:14.985 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.985 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:15.243 256+0 records in 00:06:15.243 256+0 records out 00:06:15.243 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0543839 s, 19.3 MB/s 00:06:15.243 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:15.243 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:15.243 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.243 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:15.243 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:15.243 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:15.243 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:15.243 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.243 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:15.243 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.243 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:15.243 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.243 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:15.243 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.243 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:15.243 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.243 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:15.243 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.243 17:47:33 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:15.243 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:15.243 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:15.243 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.243 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:15.243 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:15.243 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:15.243 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.243 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:15.501 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:15.501 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:15.501 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:15.501 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.501 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.501 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:15.501 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:15.501 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.501 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.501 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:15.501 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:15.501 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:15.501 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:15.501 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.501 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.501 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:15.501 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:15.501 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.501 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.501 17:47:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:15.759 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:15.759 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:15.759 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:15.759 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.759 
17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.759 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:15.759 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:15.759 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.759 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.759 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:16.017 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:16.017 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:16.017 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:16.017 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.017 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.017 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:16.017 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.017 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.017 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.017 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:16.278 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:16.278 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:16.278 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:16.278 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.278 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.278 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:16.278 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.278 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.278 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.278 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:16.537 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:16.537 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:16.537 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:16.537 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.537 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.537 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:16.537 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.537 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.537 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.537 17:47:34 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.537 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.794 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:16.794 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:16.794 17:47:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.794 17:47:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:16.794 17:47:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:16.794 17:47:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.794 17:47:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:16.794 17:47:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:16.794 17:47:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:16.794 17:47:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:16.794 17:47:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:16.794 17:47:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:16.794 17:47:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:16.794 17:47:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.794 17:47:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:16.794 17:47:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:17.053 malloc_lvol_verify 00:06:17.053 17:47:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:17.053 c124be0b-15e0-4515-be57-cc6d76798c4a 00:06:17.318 17:47:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:17.318 73ff3e00-6c67-4398-9391-534bedf633da 00:06:17.318 17:47:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:17.579 /dev/nbd0 00:06:17.579 17:47:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:17.579 17:47:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:17.579 17:47:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:17.579 17:47:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:17.579 17:47:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:17.579 mke2fs 1.47.0 (5-Feb-2023) 00:06:17.579 Discarding device blocks: 0/4096 done 00:06:17.579 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:17.579 00:06:17.579 Allocating group tables: 0/1 done 00:06:17.579 Writing inode tables: 0/1 done 00:06:17.579 Creating journal (1024 blocks): done 00:06:17.579 Writing superblocks and filesystem accounting information: 0/1 done 00:06:17.579 00:06:17.579 17:47:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
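[Editor's note: condensed, the nbd_with_lvol_verify step traced above issues the following RPC sequence over the /var/tmp/spdk-nbd.sock socket. Commands and arguments are copied from the trace; the comments are interpretation:]

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB malloc bdev, 512-byte blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of the malloc bdev
    $rpc bdev_lvol_create lvol 4 -l lvs                    # a 4 MB lvol named "lvol" in that store
    $rpc nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as /dev/nbd0
    (( $(< /sys/block/nbd0/size) != 0 ))                   # capacity is visible to the kernel
    mkfs.ext4 /dev/nbd0                                    # and it takes a filesystem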
00:06:17.579 17:47:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.579 17:47:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:17.579 17:47:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:17.579 17:47:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:17.579 17:47:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.579 17:47:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:17.838 17:47:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:17.838 17:47:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:17.838 17:47:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:17.838 17:47:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.838 17:47:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.838 17:47:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:17.838 17:47:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.838 17:47:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.838 17:47:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 59919 00:06:17.838 17:47:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 59919 ']' 00:06:17.838 17:47:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 59919 00:06:17.838 17:47:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:06:17.838 17:47:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.838 17:47:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59919 00:06:17.838 killing process with pid 59919 00:06:17.838 17:47:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:17.838 17:47:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:17.838 17:47:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59919' 00:06:17.838 17:47:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 59919 00:06:17.838 17:47:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 59919 00:06:18.778 17:47:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:18.778 00:06:18.778 real 0m9.584s 00:06:18.778 user 0m13.945s 00:06:18.778 sys 0m2.952s 00:06:18.778 ************************************ 00:06:18.778 END TEST bdev_nbd 00:06:18.778 ************************************ 00:06:18.778 17:47:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.778 17:47:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:18.778 17:47:37 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:18.778 17:47:37 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:18.778 skipping fio tests on NVMe due to multi-ns failures. 00:06:18.778 17:47:37 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
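[Editor's note: the verification stages that follow (bdev_verify, bdev_verify_big_io, bdev_write_zeroes) all run the same bdevperf example binary against the bdev.json config, varying only the workload flags. Spelled out for the first run below; the flag readings are the usual bdevperf semantics, not something stated in the log:]

    # -q 128    queue depth per job
    # -o 4096   I/O size in bytes (65536 in the big_io run)
    # -w verify write, then read back and compare (write_zeroes in the last run)
    # -t 5      run time in seconds
    # -C        let every reactor core drive every bdev, hence the paired
    #           Core Mask 0x1 / 0x2 jobs per Nvme bdev in the tables below
    # -m 0x3    core mask: cores 0 and 1
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3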
00:06:18.778 17:47:37 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:06:18.779 17:47:37 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:06:18.779 17:47:37 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:06:18.779 17:47:37 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:18.779 17:47:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:18.779 ************************************
00:06:18.779 START TEST bdev_verify
00:06:18.779 ************************************
00:06:18.779 17:47:37 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:06:18.779 [2024-10-25 17:47:37.090937] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization...
00:06:19.040 [2024-10-25 17:47:37.091060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60292 ]
00:06:19.040 [2024-10-25 17:47:37.251118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:19.040 [2024-10-25 17:47:37.364466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:19.040 [2024-10-25 17:47:37.364594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:19.606 Running I/O for 5 seconds...
00:06:21.935 18880.00 IOPS, 73.75 MiB/s
[2024-10-25T17:47:41.316Z] 19968.00 IOPS, 78.00 MiB/s
[2024-10-25T17:47:42.257Z] 20010.67 IOPS, 78.17 MiB/s
[2024-10-25T17:47:43.202Z] 19488.00 IOPS, 76.12 MiB/s
[2024-10-25T17:47:43.202Z] 19187.20 IOPS, 74.95 MiB/s
00:06:24.768 Latency(us)
00:06:24.768 [2024-10-25T17:47:43.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:24.768 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:24.768 Verification LBA range: start 0x0 length 0xbd0bd
00:06:24.768 Nvme0n1 : 5.08 1563.03 6.11 0.00 0.00 81580.87 16031.11 80659.69
00:06:24.768 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:24.768 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:06:24.768 Nvme0n1 : 5.09 1585.80 6.19 0.00 0.00 80526.84 14821.22 75820.11
00:06:24.768 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:24.768 Verification LBA range: start 0x0 length 0xa0000
00:06:24.768 Nvme1n1 : 5.08 1562.54 6.10 0.00 0.00 81368.70 19055.85 66947.54
00:06:24.768 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:24.768 Verification LBA range: start 0xa0000 length 0xa0000
00:06:24.768 Nvme1n1 : 5.09 1585.33 6.19 0.00 0.00 80404.00 16031.11 70173.93
00:06:24.768 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:24.768 Verification LBA range: start 0x0 length 0x80000
00:06:24.768 Nvme2n1 : 5.08 1562.07 6.10 0.00 0.00 81193.95 21072.34 64931.05
00:06:24.768 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:24.768 Verification LBA range: start 0x80000 length 0x80000
00:06:24.768 Nvme2n1 : 5.09 1584.86 6.19 0.00 0.00 80255.32 15930.29 68157.44
00:06:24.768 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:24.768 Verification LBA range: start 0x0 length 0x80000
00:06:24.768 Nvme2n2 : 5.08 1561.55 6.10 0.00 0.00 81012.35 20366.57 64931.05
00:06:24.768 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:24.768 Verification LBA range: start 0x80000 length 0x80000
00:06:24.768 Nvme2n2 : 5.09 1584.42 6.19 0.00 0.00 80094.03 15930.29 68560.74
00:06:24.768 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:24.768 Verification LBA range: start 0x0 length 0x80000
00:06:24.768 Nvme2n3 : 5.09 1570.85 6.14 0.00 0.00 80418.23 4234.63 68560.74
00:06:24.768 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:24.768 Verification LBA range: start 0x80000 length 0x80000
00:06:24.768 Nvme2n3 : 5.09 1583.96 6.19 0.00 0.00 79924.84 14518.74 69770.63
00:06:24.768 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:24.768 Verification LBA range: start 0x0 length 0x20000
00:06:24.768 Nvme3n1 : 5.10 1569.68 6.13 0.00 0.00 80311.07 6805.66 70173.93
00:06:24.768 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:24.768 Verification LBA range: start 0x20000 length 0x20000
00:06:24.768 Nvme3n1 : 5.09 1583.50 6.19 0.00 0.00 79757.49 7007.31 70577.23
00:06:24.768 [2024-10-25T17:47:43.203Z] ===================================================================================================================
00:06:24.768 [2024-10-25T17:47:43.203Z] Total : 18897.59 73.82 0.00 0.00 80567.09 4234.63 80659.69
00:06:26.156
00:06:26.156 real 0m7.196s
00:06:26.156 user 0m13.387s
00:06:26.156 sys 0m0.238s
00:06:26.156 17:47:44 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:26.156 ************************************
00:06:26.156 END TEST bdev_verify
00:06:26.156 ************************************
00:06:26.156 17:47:44 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:06:26.156 17:47:44 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:06:26.156 17:47:44 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:06:26.156 17:47:44 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:26.156 17:47:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:26.156 ************************************
00:06:26.156 START TEST bdev_verify_big_io
00:06:26.156 ************************************
00:06:26.156 17:47:44 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:06:26.156 [2024-10-25 17:47:44.365114] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization...
00:06:26.156 [2024-10-25 17:47:44.365267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60390 ]
00:06:26.156 [2024-10-25 17:47:44.531463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:26.417 [2024-10-25 17:47:44.665025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:26.417 [2024-10-25 17:47:44.665135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:26.991 Running I/O for 5 seconds...
00:06:32.835 745.00 IOPS, 46.56 MiB/s
[2024-10-25T17:47:51.530Z] 2025.00 IOPS, 126.56 MiB/s
00:06:33.095 Latency(us)
00:06:33.095 [2024-10-25T17:47:51.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:33.095 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:33.095 Verification LBA range: start 0x0 length 0xbd0b
00:06:33.095 Nvme0n1 : 5.61 114.00 7.12 0.00 0.00 1062119.27 23794.61 1167952.34
00:06:33.095 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:33.095 Verification LBA range: start 0xbd0b length 0xbd0b
00:06:33.095 Nvme0n1 : 5.82 88.00 5.50 0.00 0.00 1393019.08 33272.12 1142141.24
00:06:33.095 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:33.095 Verification LBA range: start 0x0 length 0xa000
00:06:33.095 Nvme1n1 : 5.68 123.91 7.74 0.00 0.00 968901.07 64527.75 1025991.29
00:06:33.095 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:33.095 Verification LBA range: start 0xa000 length 0xa000
00:06:33.095 Nvme1n1 : 5.95 89.88 5.62 0.00 0.00 1324446.47 120989.54 1167952.34
00:06:33.095 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:33.095 Verification LBA range: start 0x0 length 0x8000
00:06:33.095 Nvme2n1 : 5.82 127.21 7.95 0.00 0.00 907012.79 52832.10 1051802.39
00:06:33.095 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:33.095 Verification LBA range: start 0x8000 length 0x8000
00:06:33.095 Nvme2n1 : 5.95 90.36 5.65 0.00 0.00 1277540.83 130668.70 1200216.22
00:06:33.095 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:33.095 Verification LBA range: start 0x0 length 0x8000
00:06:33.095 Nvme2n2 : 5.82 131.98 8.25 0.00 0.00 852436.41 80256.39 1084066.26
00:06:33.095 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:33.095 Verification LBA range: start 0x8000 length 0x8000
00:06:33.095 Nvme2n2 : 6.00 95.94 6.00 0.00 0.00 1178010.48 46984.27 1219574.55
00:06:33.095 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:33.095 Verification LBA range: start 0x0 length 0x8000
00:06:33.095 Nvme2n3 : 6.00 145.53 9.10 0.00 0.00 745141.84 36095.21 1122782.92
00:06:33.095 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:33.095 Verification LBA range: start 0x8000 length 0x8000
00:06:33.095 Nvme2n3 : 6.02 101.52 6.35 0.00 0.00 1079981.77 5620.97 1245385.65
00:06:33.095 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:33.095 Verification LBA range: start 0x0 length 0x2000
00:06:33.095 Nvme3n1 : 6.02 165.63 10.35 0.00 0.00 634347.25 310.35 1155046.79
00:06:33.095 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:33.095 Verification LBA range: start 0x2000 length 0x2000
Nvme3n1 : 6.02 106.27 6.64 0.00 0.00 993372.93 2104.71 1271196.75
[2024-10-25T17:47:51.530Z] ===================================================================================================================
[2024-10-25T17:47:51.530Z] Total : 1380.22 86.26 0.00 0.00 990159.76 310.35 1271196.75
00:06:34.483
00:06:34.483 real 0m8.567s
00:06:34.483 user 0m16.091s
00:06:34.483 sys 0m0.304s
00:06:34.483 ************************************
00:06:34.483 17:47:52 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:34.483 17:47:52 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:06:34.483 END TEST bdev_verify_big_io
00:06:34.483 ************************************
00:06:34.483 17:47:52 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:34.483 17:47:52 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:06:34.483 17:47:52 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:34.483 17:47:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:34.745 ************************************
00:06:34.745 START TEST bdev_write_zeroes
00:06:34.745 ************************************
00:06:34.745 17:47:52 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:34.745 [2024-10-25 17:47:52.995488] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization...
00:06:34.745 [2024-10-25 17:47:52.995613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60499 ]
00:06:34.745 [2024-10-25 17:47:53.151126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:35.007 [2024-10-25 17:47:53.254064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:35.578 Running I/O for 1 seconds...
00:06:36.518 54848.00 IOPS, 214.25 MiB/s
00:06:36.518 Latency(us)
00:06:36.518 [2024-10-25T17:47:54.953Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:36.518 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:36.518 Nvme0n1 : 1.02 9182.86 35.87 0.00 0.00 13911.45 4940.41 25609.45
00:06:36.519 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:36.519 Nvme1n1 : 1.02 9172.25 35.83 0.00 0.00 13911.13 9981.64 21778.12
00:06:36.519 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:36.519 Nvme2n1 : 1.02 9161.85 35.79 0.00 0.00 13890.11 9880.81 21878.94
00:06:36.519 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:36.519 Nvme2n2 : 1.02 9151.35 35.75 0.00 0.00 13863.01 7763.50 21979.77
00:06:36.519 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:36.519 Nvme2n3 : 1.02 9140.90 35.71 0.00 0.00 13857.53 7813.91 21878.94
00:06:36.519 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:36.519 Nvme3n1 : 1.02 9068.03 35.42 0.00 0.00 13954.74 8469.27 21778.12
00:06:36.519 [2024-10-25T17:47:54.954Z] ===================================================================================================================
00:06:36.519 [2024-10-25T17:47:54.954Z] Total : 54877.25 214.36 0.00 0.00 13897.93 4940.41 25609.45
00:06:37.466
00:06:37.466 real 0m2.682s
00:06:37.466 user 0m2.355s
00:06:37.466 sys 0m0.211s
00:06:37.466 17:47:55 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:37.466 ************************************
00:06:37.466 END TEST bdev_write_zeroes
00:06:37.466 ************************************
00:06:37.466 17:47:55 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:06:37.466 17:47:55 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:37.466 17:47:55 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:06:37.466 17:47:55 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:37.466 17:47:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:37.466 ************************************
00:06:37.466 START TEST bdev_json_nonenclosed
00:06:37.466 ************************************
00:06:37.466 17:47:55 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:37.466 [2024-10-25 17:47:55.753367] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization...
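[Editor's note: bdev_json_nonenclosed is a negative test. It points bdevperf at a config whose top level is not a JSON object and expects the "not enclosed in {}" error that follows below. A hypothetical illustration of the rejected shape (the real fixture is test/bdev/nonenclosed.json, whose exact contents are not shown in this log):]

    "subsystems": [
      { "subsystem": "bdev", "config": [] }
    ]

[A valid config wraps the same content in braces: { "subsystems": [ ... ] }.]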
00:06:37.466 [2024-10-25 17:47:55.753481] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60552 ] 00:06:37.727 [2024-10-25 17:47:55.907154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.727 [2024-10-25 17:47:56.005440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.727 [2024-10-25 17:47:56.005517] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:37.727 [2024-10-25 17:47:56.005534] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:37.727 [2024-10-25 17:47:56.005543] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:37.988 00:06:37.988 real 0m0.495s 00:06:37.988 user 0m0.303s 00:06:37.988 sys 0m0.088s 00:06:37.988 17:47:56 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.988 ************************************ 00:06:37.988 END TEST bdev_json_nonenclosed 00:06:37.988 ************************************ 00:06:37.988 17:47:56 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:37.989 17:47:56 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:37.989 17:47:56 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:37.989 17:47:56 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.989 17:47:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:37.989 ************************************ 00:06:37.989 START TEST bdev_json_nonarray 00:06:37.989 ************************************ 00:06:37.989 17:47:56 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:37.989 [2024-10-25 17:47:56.314808] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:06:37.989 [2024-10-25 17:47:56.314963] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60572 ] 00:06:38.250 [2024-10-25 17:47:56.472256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.250 [2024-10-25 17:47:56.571275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.250 [2024-10-25 17:47:56.571348] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
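[Editor's note: bdev_json_nonarray, whose error appears at the end of the trace just above, is the companion negative test: the file is enclosed in {}, but "subsystems" is not an array. A hypothetical shape (the real fixture is test/bdev/nonarray.json):]

    { "subsystems": { "subsystem": "bdev" } }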
00:06:38.250 [2024-10-25 17:47:56.571365] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:38.250 [2024-10-25 17:47:56.571374] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:38.512 00:06:38.512 real 0m0.500s 00:06:38.512 user 0m0.299s 00:06:38.512 sys 0m0.096s 00:06:38.512 17:47:56 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.512 ************************************ 00:06:38.512 END TEST bdev_json_nonarray 00:06:38.512 ************************************ 00:06:38.512 17:47:56 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:38.512 17:47:56 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:06:38.512 17:47:56 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:06:38.512 17:47:56 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:06:38.512 17:47:56 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:38.512 17:47:56 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:06:38.512 17:47:56 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:38.512 17:47:56 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:38.512 17:47:56 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:38.512 17:47:56 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:38.512 17:47:56 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:38.512 17:47:56 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:38.512 00:06:38.512 real 0m36.243s 00:06:38.512 user 0m56.160s 00:06:38.512 sys 0m5.047s 00:06:38.512 17:47:56 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.512 ************************************ 00:06:38.512 END TEST blockdev_nvme 00:06:38.512 ************************************ 00:06:38.512 17:47:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:38.512 17:47:56 -- spdk/autotest.sh@209 -- # uname -s 00:06:38.512 17:47:56 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:38.512 17:47:56 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:38.512 17:47:56 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:38.512 17:47:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.512 17:47:56 -- common/autotest_common.sh@10 -- # set +x 00:06:38.512 ************************************ 00:06:38.512 START TEST blockdev_nvme_gpt 00:06:38.512 ************************************ 00:06:38.512 17:47:56 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:38.512 * Looking for test storage... 
00:06:38.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:38.775 17:47:56 blockdev_nvme_gpt -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:38.775 17:47:56 blockdev_nvme_gpt -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:38.775 17:47:56 blockdev_nvme_gpt -- common/autotest_common.sh@1689 -- # lcov --version 00:06:38.775 17:47:57 blockdev_nvme_gpt -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:38.775 17:47:57 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.775 17:47:57 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.775 17:47:57 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.775 17:47:57 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.775 17:47:57 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.775 17:47:57 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.775 17:47:57 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.775 17:47:57 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.775 17:47:57 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.775 17:47:57 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.775 17:47:57 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.775 17:47:57 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:38.775 17:47:57 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:38.775 17:47:57 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.775 17:47:57 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:38.775 17:47:57 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:38.775 17:47:57 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:38.775 17:47:57 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.775 17:47:57 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:38.775 17:47:57 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.775 17:47:57 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:38.775 17:47:57 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:38.775 17:47:57 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.775 17:47:57 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:38.775 17:47:57 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.775 17:47:57 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.775 17:47:57 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.775 17:47:57 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:38.775 17:47:57 blockdev_nvme_gpt -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.775 17:47:57 blockdev_nvme_gpt -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:38.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.775 --rc genhtml_branch_coverage=1 00:06:38.775 --rc genhtml_function_coverage=1 00:06:38.775 --rc genhtml_legend=1 00:06:38.775 --rc geninfo_all_blocks=1 00:06:38.775 --rc geninfo_unexecuted_blocks=1 00:06:38.775 00:06:38.775 ' 00:06:38.775 17:47:57 blockdev_nvme_gpt -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:38.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.775 --rc 
genhtml_branch_coverage=1 00:06:38.775 --rc genhtml_function_coverage=1 00:06:38.775 --rc genhtml_legend=1 00:06:38.775 --rc geninfo_all_blocks=1 00:06:38.775 --rc geninfo_unexecuted_blocks=1 00:06:38.775 00:06:38.775 ' 00:06:38.775 17:47:57 blockdev_nvme_gpt -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:38.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.775 --rc genhtml_branch_coverage=1 00:06:38.775 --rc genhtml_function_coverage=1 00:06:38.775 --rc genhtml_legend=1 00:06:38.775 --rc geninfo_all_blocks=1 00:06:38.775 --rc geninfo_unexecuted_blocks=1 00:06:38.775 00:06:38.775 ' 00:06:38.775 17:47:57 blockdev_nvme_gpt -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:38.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.775 --rc genhtml_branch_coverage=1 00:06:38.775 --rc genhtml_function_coverage=1 00:06:38.775 --rc genhtml_legend=1 00:06:38.775 --rc geninfo_all_blocks=1 00:06:38.775 --rc geninfo_unexecuted_blocks=1 00:06:38.775 00:06:38.775 ' 00:06:38.775 17:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:38.775 17:47:57 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:38.775 17:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:38.775 17:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:38.775 17:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:38.775 17:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:38.775 17:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:38.775 17:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:38.775 17:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:38.775 17:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:38.775 17:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:38.775 17:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:38.775 17:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:38.775 17:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:38.775 17:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:38.775 17:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:38.775 17:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:38.775 17:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:38.775 17:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:38.775 17:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:38.776 17:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:38.776 17:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:38.776 17:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:38.776 17:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:38.776 17:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60658 00:06:38.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
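start_spdk_tgt has just launched the target (pid 60658), and the harness now blocks in waitforlisten until the RPC socket answers. A hedged sketch of that launch, trap, and poll sequence; the rpc.py probe and the retry budget here are illustrative rather than a verbatim copy of autotest_common.sh:

    start_and_wait_for_spdk_tgt() {
        local sock=/var/tmp/spdk.sock
        "$rootdir/build/bin/spdk_tgt" &                 # launch the target in the background
        spdk_tgt_pid=$!
        trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
        echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
        local i
        for (( i = 0; i < 100; i++ )); do               # ~10 s budget, assumed
            "$rootdir/scripts/rpc.py" -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }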
00:06:38.776 17:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:38.776 17:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60658 00:06:38.776 17:47:57 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 60658 ']' 00:06:38.776 17:47:57 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.776 17:47:57 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.776 17:47:57 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.776 17:47:57 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.776 17:47:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:38.776 17:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:38.776 [2024-10-25 17:47:57.101906] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:06:38.776 [2024-10-25 17:47:57.102029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60658 ] 00:06:39.038 [2024-10-25 17:47:57.259385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.038 [2024-10-25 17:47:57.358652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.611 17:47:57 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.611 17:47:57 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:06:39.611 17:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:39.611 17:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:06:39.611 17:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:39.872 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:40.133 Waiting for block devices as requested 00:06:40.133 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:40.134 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:40.395 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:40.395 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:45.689 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:45.689 17:48:03 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # zoned_devs=() 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # local -gA zoned_devs 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1654 -- # local nvme bdf 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # is_block_zoned nvme0n1 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1646 -- # local device=nvme0n1 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:06:45.689 
17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # is_block_zoned nvme1n1 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1646 -- # local device=nvme1n1 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2n1 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1646 -- # local device=nvme2n1 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2n2 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1646 -- # local device=nvme2n2 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2n3 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1646 -- # local device=nvme2n3 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # is_block_zoned nvme3c3n1 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1646 -- # local device=nvme3c3n1 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # is_block_zoned nvme3n1 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1646 -- # local device=nvme3n1 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:45.689 17:48:03 blockdev_nvme_gpt -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:06:45.689 17:48:03 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:45.689 17:48:03 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:45.689 17:48:03 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:45.689 
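get_zoned_devs, traced above, walks every NVMe block node under /sys/block and records a device only when the kernel's queue/zoned attribute is something other than none; here every namespace reports none, so the array stays empty and all six devices remain GPT candidates. A condensed sketch of that check:

    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        dev=${nvme##*/}                           # e.g. nvme0n1
        [[ -e $nvme/queue/zoned ]] || continue
        zoned=$(<"$nvme/queue/zoned")             # "none", "host-aware" or "host-managed"
        [[ $zoned != none ]] && zoned_devs[$dev]=$zoned
    done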
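Just below, setup_gpt_conf locates the unlabeled disk via parted, creates two test partitions, and then needs SPDK's GPT partition-type GUIDs. The trace shows how get_spdk_gpt digs the GUID out of module/bdev/gpt/gpt.h: grep the macro line, let read split it on parentheses, then reshape the argument list into GUID form. A standalone sketch of that extraction; the exact macro layout in gpt.h is inferred from the traced intermediate values:

    get_spdk_gpt() {
        local gpt_h=$rootdir/module/bdev/gpt/gpt.h spdk_guid
        # matches a definition like: SPDK_GPT_PART_TYPE_GUID(0x6527994e, 0x2c5a, 0x4eec, 0x9613, 0x8f5944074e8b)
        IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$gpt_h")
        spdk_guid=${spdk_guid//, /-}    # 0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b
        spdk_guid=${spdk_guid//0x/}     # 6527994e-2c5a-4eec-9613-8f5944074e8b
        echo "$spdk_guid"
    }

The resulting GUID is then handed to sgdisk (-t 1:6527994e-... -u 1:6f89f330-...), which is why the gpt bdev module later exposes Nvme1n1p1/Nvme1n1p2 with exactly these partition_type_guid and unique_partition_guid values in the bdev dump further down.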
17:48:03 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:45.689 17:48:03 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:45.689 17:48:03 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:45.689 17:48:03 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:45.689 17:48:03 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:45.689 BYT; 00:06:45.689 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:45.689 17:48:03 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:45.689 BYT; 00:06:45.689 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:45.690 17:48:03 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:45.690 17:48:03 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:45.690 17:48:03 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:45.690 17:48:03 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:45.690 17:48:03 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:45.690 17:48:03 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:45.690 17:48:03 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:45.690 17:48:03 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:45.690 17:48:03 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:45.690 17:48:03 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:45.690 17:48:03 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:45.690 17:48:03 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:45.690 17:48:03 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:45.690 17:48:03 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:45.690 17:48:03 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:45.690 17:48:03 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:45.690 17:48:03 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:45.690 17:48:03 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:45.690 17:48:03 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:45.690 17:48:03 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:45.690 17:48:03 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:45.690 17:48:03 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:45.690 17:48:03 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:45.690 17:48:03 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:45.690 
17:48:03 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:45.690 17:48:03 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:45.690 17:48:03 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:45.690 17:48:03 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:45.690 17:48:03 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:46.633 The operation has completed successfully. 00:06:46.633 17:48:04 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:47.577 The operation has completed successfully. 00:06:47.577 17:48:05 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:48.151 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:48.726 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:48.726 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:48.726 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:48.726 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:48.726 17:48:07 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:48.726 17:48:07 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.726 17:48:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:48.987 [] 00:06:48.987 17:48:07 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.987 17:48:07 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:48.987 17:48:07 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:48.987 17:48:07 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:48.987 17:48:07 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:48.987 17:48:07 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:48.987 17:48:07 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.987 17:48:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:49.250 17:48:07 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.250 17:48:07 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:49.250 17:48:07 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.250 17:48:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:49.250 17:48:07 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.250 17:48:07 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:06:49.250 17:48:07 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd 
save_subsystem_config -n accel 00:06:49.250 17:48:07 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.250 17:48:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:49.250 17:48:07 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.250 17:48:07 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:49.250 17:48:07 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.250 17:48:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:49.250 17:48:07 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.250 17:48:07 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:49.250 17:48:07 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.250 17:48:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:49.250 17:48:07 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.250 17:48:07 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:49.250 17:48:07 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:49.250 17:48:07 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:49.250 17:48:07 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.250 17:48:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:49.250 17:48:07 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.250 17:48:07 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:49.250 17:48:07 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:49.251 17:48:07 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "623cb880-8f98-4578-a9b6-dc993cd06228"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "623cb880-8f98-4578-a9b6-dc993cd06228",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' 
"product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "8f91a7ad-a1f1-4690-890d-de0f923d28cd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8f91a7ad-a1f1-4690-890d-de0f923d28cd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' 
"ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "1a187e38-038a-49f3-a007-e3af4110a9c7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1a187e38-038a-49f3-a007-e3af4110a9c7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "cfaecad6-524a-4422-86a8-7507033655a0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cfaecad6-524a-4422-86a8-7507033655a0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "eb50e4f6-5fb8-46eb-a1d7-d54087c93685"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "eb50e4f6-5fb8-46eb-a1d7-d54087c93685",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 
0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:49.251 17:48:07 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:49.251 17:48:07 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:49.251 17:48:07 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:49.251 17:48:07 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 60658 00:06:49.251 17:48:07 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 60658 ']' 00:06:49.251 17:48:07 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 60658 00:06:49.251 17:48:07 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:06:49.251 17:48:07 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.512 17:48:07 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60658 00:06:49.512 killing process with pid 60658 00:06:49.512 17:48:07 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:49.512 17:48:07 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:49.512 17:48:07 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60658' 00:06:49.512 17:48:07 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 60658 00:06:49.512 17:48:07 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 60658 00:06:51.430 17:48:09 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:51.430 17:48:09 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:51.430 17:48:09 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:51.430 17:48:09 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.430 17:48:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:51.430 ************************************ 00:06:51.430 START TEST bdev_hello_world 00:06:51.430 ************************************ 00:06:51.430 17:48:09 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:51.430 [2024-10-25 17:48:09.443836] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:06:51.430 [2024-10-25 17:48:09.443989] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61288 ] 00:06:51.430 [2024-10-25 17:48:09.601165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.430 [2024-10-25 17:48:09.723806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.003 [2024-10-25 17:48:10.311226] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:52.003 [2024-10-25 17:48:10.311299] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:52.003 [2024-10-25 17:48:10.311323] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:52.003 [2024-10-25 17:48:10.314054] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:52.003 [2024-10-25 17:48:10.314997] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:52.003 [2024-10-25 17:48:10.315040] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:52.003 [2024-10-25 17:48:10.315771] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:52.003 00:06:52.003 [2024-10-25 17:48:10.315807] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:52.946 00:06:52.946 real 0m1.731s 00:06:52.946 user 0m1.386s 00:06:52.946 sys 0m0.233s 00:06:52.946 17:48:11 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.946 ************************************ 00:06:52.946 END TEST bdev_hello_world 00:06:52.946 ************************************ 00:06:52.946 17:48:11 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:52.946 17:48:11 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:52.946 17:48:11 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:52.946 17:48:11 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.946 17:48:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:52.946 ************************************ 00:06:52.946 START TEST bdev_bounds 00:06:52.946 ************************************ 00:06:52.946 17:48:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:06:52.946 17:48:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61325 00:06:52.946 17:48:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:52.946 Process bdevio pid: 61325 00:06:52.946 17:48:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61325' 00:06:52.946 17:48:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61325 00:06:52.946 17:48:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 61325 ']' 00:06:52.946 17:48:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
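The killprocess teardown traced after each test (a kill -0 liveness probe, a ps comm= lookup confirming the pid still names an SPDK reactor, then kill and wait) is the same helper that will reap this bdevio process at pid 61325. A rough reconstruction of the branch the trace exercises; the real helper also special-cases processes started under sudo:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                        # bail out if the process is already gone
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. "reactor_0"
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                           # reap it; ignore "not a child of this shell"
    }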
00:06:52.946 17:48:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.946 17:48:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.946 17:48:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.946 17:48:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:52.946 17:48:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:52.946 [2024-10-25 17:48:11.248903] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:06:52.946 [2024-10-25 17:48:11.249055] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61325 ] 00:06:53.207 [2024-10-25 17:48:11.412717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.207 [2024-10-25 17:48:11.540673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.207 [2024-10-25 17:48:11.541370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.207 [2024-10-25 17:48:11.541487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.778 17:48:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.778 17:48:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:06:53.778 17:48:12 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:54.039 I/O targets: 00:06:54.039 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:54.039 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:54.039 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:54.039 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:54.039 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:54.039 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:54.039 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:54.039 00:06:54.039 00:06:54.039 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.039 http://cunit.sourceforge.net/ 00:06:54.039 00:06:54.039 00:06:54.039 Suite: bdevio tests on: Nvme3n1 00:06:54.039 Test: blockdev write read block ...passed 00:06:54.039 Test: blockdev write zeroes read block ...passed 00:06:54.039 Test: blockdev write zeroes read no split ...passed 00:06:54.039 Test: blockdev write zeroes read split ...passed 00:06:54.039 Test: blockdev write zeroes read split partial ...passed 00:06:54.040 Test: blockdev reset ...[2024-10-25 17:48:12.321952] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:54.040 [2024-10-25 17:48:12.326959] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:54.040 passed 00:06:54.040 Test: blockdev write read 8 blocks ...passed 00:06:54.040 Test: blockdev write read size > 128k ...passed 00:06:54.040 Test: blockdev write read invalid size ...passed 00:06:54.040 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:54.040 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:54.040 Test: blockdev write read max offset ...passed 00:06:54.040 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:54.040 Test: blockdev writev readv 8 blocks ...passed 00:06:54.040 Test: blockdev writev readv 30 x 1block ...passed 00:06:54.040 Test: blockdev writev readv block ...passed 00:06:54.040 Test: blockdev writev readv size > 128k ...passed 00:06:54.040 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:54.040 Test: blockdev comparev and writev ...[2024-10-25 17:48:12.348429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ab404000 len:0x1000 00:06:54.040 [2024-10-25 17:48:12.348728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:54.040 passed 00:06:54.040 Test: blockdev nvme passthru rw ...passed 00:06:54.040 Test: blockdev nvme passthru vendor specific ...[2024-10-25 17:48:12.351298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:54.040 [2024-10-25 17:48:12.351507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:54.040 passed 00:06:54.040 Test: blockdev nvme admin passthru ...passed 00:06:54.040 Test: blockdev copy ...passed 00:06:54.040 Suite: bdevio tests on: Nvme2n3 00:06:54.040 Test: blockdev write read block ...passed 00:06:54.040 Test: blockdev write zeroes read block ...passed 00:06:54.040 Test: blockdev write zeroes read no split ...passed 00:06:54.040 Test: blockdev write zeroes read split ...passed 00:06:54.040 Test: blockdev write zeroes read split partial ...passed 00:06:54.040 Test: blockdev reset ...[2024-10-25 17:48:12.409375] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:54.040 [2024-10-25 17:48:12.413535] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:54.040 passed 00:06:54.040 Test: blockdev write read 8 blocks ...passed 00:06:54.040 Test: blockdev write read size > 128k ...passed 00:06:54.040 Test: blockdev write read invalid size ...passed 00:06:54.040 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:54.040 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:54.040 Test: blockdev write read max offset ...passed 00:06:54.040 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:54.040 Test: blockdev writev readv 8 blocks ...passed 00:06:54.040 Test: blockdev writev readv 30 x 1block ...passed 00:06:54.040 Test: blockdev writev readv block ...passed 00:06:54.040 Test: blockdev writev readv size > 128k ...passed 00:06:54.040 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:54.040 Test: blockdev comparev and writev ...[2024-10-25 17:48:12.434763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ab402000 len:0x1000 00:06:54.040 [2024-10-25 17:48:12.434923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:54.040 passed 00:06:54.040 Test: blockdev nvme passthru rw ...passed 00:06:54.040 Test: blockdev nvme passthru vendor specific ...[2024-10-25 17:48:12.437435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:54.040 [2024-10-25 17:48:12.437548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:54.040 passed 00:06:54.040 Test: blockdev nvme admin passthru ...passed 00:06:54.040 Test: blockdev copy ...passed 00:06:54.040 Suite: bdevio tests on: Nvme2n2 00:06:54.040 Test: blockdev write read block ...passed 00:06:54.040 Test: blockdev write zeroes read block ...passed 00:06:54.040 Test: blockdev write zeroes read no split ...passed 00:06:54.347 Test: blockdev write zeroes read split ...passed 00:06:54.347 Test: blockdev write zeroes read split partial ...passed 00:06:54.347 Test: blockdev reset ...[2024-10-25 17:48:12.505695] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:54.347 [2024-10-25 17:48:12.510964] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:54.347 passed 00:06:54.347 Test: blockdev write read 8 blocks ...passed 00:06:54.347 Test: blockdev write read size > 128k ...passed 00:06:54.347 Test: blockdev write read invalid size ...passed 00:06:54.347 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:54.347 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:54.347 Test: blockdev write read max offset ...passed 00:06:54.347 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:54.347 Test: blockdev writev readv 8 blocks ...passed 00:06:54.347 Test: blockdev writev readv 30 x 1block ...passed 00:06:54.347 Test: blockdev writev readv block ...passed 00:06:54.347 Test: blockdev writev readv size > 128k ...passed 00:06:54.347 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:54.347 Test: blockdev comparev and writev ...[2024-10-25 17:48:12.530899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d0838000 len:0x1000 00:06:54.347 [2024-10-25 17:48:12.531031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:54.347 passed 00:06:54.347 Test: blockdev nvme passthru rw ...passed 00:06:54.347 Test: blockdev nvme passthru vendor specific ...[2024-10-25 17:48:12.533540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:54.347 [2024-10-25 17:48:12.533648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:54.347 passed 00:06:54.347 Test: blockdev nvme admin passthru ...passed 00:06:54.347 Test: blockdev copy ...passed 00:06:54.347 Suite: bdevio tests on: Nvme2n1 00:06:54.347 Test: blockdev write read block ...passed 00:06:54.347 Test: blockdev write zeroes read block ...passed 00:06:54.347 Test: blockdev write zeroes read no split ...passed 00:06:54.347 Test: blockdev write zeroes read split ...passed 00:06:54.347 Test: blockdev write zeroes read split partial ...passed 00:06:54.347 Test: blockdev reset ...[2024-10-25 17:48:12.595057] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:54.347 [2024-10-25 17:48:12.599764] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:54.347 passed 00:06:54.347 Test: blockdev write read 8 blocks ...passed 00:06:54.347 Test: blockdev write read size > 128k ...passed 00:06:54.347 Test: blockdev write read invalid size ...passed 00:06:54.347 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:54.347 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:54.347 Test: blockdev write read max offset ...passed 00:06:54.347 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:54.347 Test: blockdev writev readv 8 blocks ...passed 00:06:54.347 Test: blockdev writev readv 30 x 1block ...passed 00:06:54.347 Test: blockdev writev readv block ...passed 00:06:54.347 Test: blockdev writev readv size > 128k ...passed 00:06:54.347 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:54.347 Test: blockdev comparev and writev ...[2024-10-25 17:48:12.619327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d0834000 len:0x1000 00:06:54.347 [2024-10-25 17:48:12.619507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:54.347 passed 00:06:54.347 Test: blockdev nvme passthru rw ...passed 00:06:54.347 Test: blockdev nvme passthru vendor specific ...[2024-10-25 17:48:12.622712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:54.347 passed 00:06:54.347 Test: blockdev nvme admin passthru ...[2024-10-25 17:48:12.622853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:54.347 passed 00:06:54.347 Test: blockdev copy ...passed 00:06:54.347 Suite: bdevio tests on: Nvme1n1p2 00:06:54.347 Test: blockdev write read block ...passed 00:06:54.347 Test: blockdev write zeroes read block ...passed 00:06:54.347 Test: blockdev write zeroes read no split ...passed 00:06:54.347 Test: blockdev write zeroes read split ...passed 00:06:54.347 Test: blockdev write zeroes read split partial ...passed 00:06:54.347 Test: blockdev reset ...[2024-10-25 17:48:12.685716] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:54.347 [2024-10-25 17:48:12.690631] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:54.347 passed 00:06:54.347 Test: blockdev write read 8 blocks ...passed 00:06:54.347 Test: blockdev write read size > 128k ...passed 00:06:54.347 Test: blockdev write read invalid size ...passed 00:06:54.347 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:54.347 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:54.347 Test: blockdev write read max offset ...passed 00:06:54.347 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:54.347 Test: blockdev writev readv 8 blocks ...passed 00:06:54.347 Test: blockdev writev readv 30 x 1block ...passed 00:06:54.347 Test: blockdev writev readv block ...passed 00:06:54.347 Test: blockdev writev readv size > 128k ...passed 00:06:54.347 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:54.347 Test: blockdev comparev and writev ...[2024-10-25 17:48:12.712814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d0830000 len:0x1000 00:06:54.347 [2024-10-25 17:48:12.712975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:54.347 passed 00:06:54.347 Test: blockdev nvme passthru rw ...passed 00:06:54.347 Test: blockdev nvme passthru vendor specific ...passed 00:06:54.347 Test: blockdev nvme admin passthru ...passed 00:06:54.347 Test: blockdev copy ...passed 00:06:54.347 Suite: bdevio tests on: Nvme1n1p1 00:06:54.347 Test: blockdev write read block ...passed 00:06:54.347 Test: blockdev write zeroes read block ...passed 00:06:54.347 Test: blockdev write zeroes read no split ...passed 00:06:54.347 Test: blockdev write zeroes read split ...passed 00:06:54.610 Test: blockdev write zeroes read split partial ...passed 00:06:54.610 Test: blockdev reset ...[2024-10-25 17:48:12.772668] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:54.610 [2024-10-25 17:48:12.776961] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:54.610 passed 00:06:54.610 Test: blockdev write read 8 blocks ...passed 00:06:54.610 Test: blockdev write read size > 128k ...passed 00:06:54.610 Test: blockdev write read invalid size ...passed 00:06:54.610 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:54.610 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:54.610 Test: blockdev write read max offset ...passed 00:06:54.610 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:54.610 Test: blockdev writev readv 8 blocks ...passed 00:06:54.610 Test: blockdev writev readv 30 x 1block ...passed 00:06:54.610 Test: blockdev writev readv block ...passed 00:06:54.610 Test: blockdev writev readv size > 128k ...passed 00:06:54.610 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:54.610 Test: blockdev comparev and writev ...[2024-10-25 17:48:12.798150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2abe0e000 len:0x1000 00:06:54.610 [2024-10-25 17:48:12.798363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:54.610 passed 00:06:54.610 Test: blockdev nvme passthru rw ...passed 00:06:54.610 Test: blockdev nvme passthru vendor specific ...passed 00:06:54.610 Test: blockdev nvme admin passthru ...passed 00:06:54.610 Test: blockdev copy ...passed 00:06:54.610 Suite: bdevio tests on: Nvme0n1 00:06:54.610 Test: blockdev write read block ...passed 00:06:54.610 Test: blockdev write zeroes read block ...passed 00:06:54.610 Test: blockdev write zeroes read no split ...passed 00:06:54.610 Test: blockdev write zeroes read split ...passed 00:06:54.610 Test: blockdev write zeroes read split partial ...passed 00:06:54.610 Test: blockdev reset ...[2024-10-25 17:48:12.859775] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:54.610 [2024-10-25 17:48:12.863199] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:54.610 passed 00:06:54.610 Test: blockdev write read 8 blocks ...passed 00:06:54.610 Test: blockdev write read size > 128k ...passed 00:06:54.610 Test: blockdev write read invalid size ...passed 00:06:54.610 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:54.610 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:54.610 Test: blockdev write read max offset ...passed 00:06:54.610 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:54.610 Test: blockdev writev readv 8 blocks ...passed 00:06:54.610 Test: blockdev writev readv 30 x 1block ...passed 00:06:54.610 Test: blockdev writev readv block ...passed 00:06:54.610 Test: blockdev writev readv size > 128k ...passed 00:06:54.610 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:54.610 Test: blockdev comparev and writev ...passed 00:06:54.610 Test: blockdev nvme passthru rw ...[2024-10-25 17:48:12.880983] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:54.610 separate metadata which is not supported yet. 
00:06:54.610 passed 00:06:54.610 Test: blockdev nvme passthru vendor specific ...[2024-10-25 17:48:12.882762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:54.610 [2024-10-25 17:48:12.882876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:54.610 passed 00:06:54.610 Test: blockdev nvme admin passthru ...passed 00:06:54.610 Test: blockdev copy ...passed 00:06:54.610 00:06:54.610 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.610 suites 7 7 n/a 0 0 00:06:54.610 tests 161 161 161 0 0 00:06:54.610 asserts 1025 1025 1025 0 n/a 00:06:54.610 00:06:54.610 Elapsed time = 1.601 seconds 00:06:54.610 0 00:06:54.610 17:48:12 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61325 00:06:54.610 17:48:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 61325 ']' 00:06:54.610 17:48:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 61325 00:06:54.610 17:48:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:06:54.610 17:48:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.610 17:48:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61325 00:06:54.610 17:48:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:54.610 17:48:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:54.610 killing process with pid 61325 00:06:54.610 17:48:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61325' 00:06:54.610 17:48:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 61325 00:06:54.610 17:48:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 61325 00:06:55.556 17:48:13 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:55.556 00:06:55.556 real 0m2.510s 00:06:55.556 user 0m6.215s 00:06:55.556 sys 0m0.387s 00:06:55.556 17:48:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.556 17:48:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:55.556 ************************************ 00:06:55.556 END TEST bdev_bounds 00:06:55.556 ************************************ 00:06:55.556 17:48:13 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:55.556 17:48:13 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:55.556 17:48:13 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.556 17:48:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:55.556 ************************************ 00:06:55.556 START TEST bdev_nbd 00:06:55.556 ************************************ 00:06:55.556 17:48:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:55.556 17:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:55.556 17:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:06:55.556 17:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.556 17:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:55.556 17:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:55.556 17:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:55.556 17:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:55.556 17:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:55.556 17:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:55.556 17:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:55.556 17:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:55.556 17:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:55.556 17:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:55.556 17:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:55.556 17:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:55.556 17:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61384 00:06:55.556 17:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:55.556 17:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61384 /var/tmp/spdk-nbd.sock 00:06:55.556 17:48:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 61384 ']' 00:06:55.556 17:48:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:55.556 17:48:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:55.556 17:48:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:55.556 17:48:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.556 17:48:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:55.556 17:48:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:55.556 [2024-10-25 17:48:13.831697] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
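At this point the nbd test has launched its own SPDK app: bdev_svc is started with a dedicated RPC socket (-r /var/tmp/spdk-nbd.sock) and the shared bdev.json config, and waitforlisten polls (up to the max_retries=100 seen in the trace) until the socket answers before any nbd_* RPCs are sent. A minimal sketch of the same setup, assuming the workspace paths from the trace and dropping bdev_svc's trailing empty argument; the real waitforlisten in autotest_common.sh also checks that the pid is still alive, while this sketch only polls the socket:

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-nbd.sock

    # Start the bdev-only app on its own RPC socket.
    "$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 \
        --json "$SPDK/test/bdev/bdev.json" &
    nbd_pid=$!

    # waitforlisten equivalent: retry a cheap RPC until the socket is up.
    for _ in $(seq 1 100); do
        "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done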
00:06:55.556 [2024-10-25 17:48:13.831849] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:55.816 [2024-10-25 17:48:13.994507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.816 [2024-10-25 17:48:14.116800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.387 17:48:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.387 17:48:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:06:56.387 17:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:56.387 17:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.387 17:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:56.387 17:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:56.387 17:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:56.387 17:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.387 17:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:56.387 17:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:56.387 17:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:56.387 17:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:56.387 17:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:56.387 17:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:56.387 17:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:56.650 17:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:56.650 17:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:56.650 17:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:56.650 17:48:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:56.650 17:48:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:56.650 17:48:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:56.650 17:48:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:56.650 17:48:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:56.650 17:48:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:56.650 17:48:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:56.650 17:48:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:56.650 17:48:14 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.650 1+0 records in 00:06:56.650 1+0 records out 00:06:56.650 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00307028 s, 1.3 MB/s 00:06:56.650 17:48:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.650 17:48:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:56.650 17:48:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.650 17:48:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:56.650 17:48:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:56.650 17:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:56.650 17:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:56.650 17:48:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:56.911 17:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:56.911 17:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:56.911 17:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:56.911 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:56.911 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:56.911 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:56.911 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:56.911 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:56.911 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:56.911 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:56.911 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:56.911 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.911 1+0 records in 00:06:56.911 1+0 records out 00:06:56.911 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00071259 s, 5.7 MB/s 00:06:56.911 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.911 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:56.911 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.911 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:56.911 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:56.911 17:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:56.911 17:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:56.911 17:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:57.172 17:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:57.172 17:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:57.172 17:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:57.172 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:06:57.172 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:57.172 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:57.172 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:57.172 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:06:57.172 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:57.172 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:57.172 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:57.172 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:57.172 1+0 records in 00:06:57.172 1+0 records out 00:06:57.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105991 s, 3.9 MB/s 00:06:57.172 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.172 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:57.172 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.172 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:57.172 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:57.172 17:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:57.172 17:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:57.172 17:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:57.433 1+0 records in 00:06:57.433 1+0 records out 00:06:57.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000636291 s, 6.4 MB/s 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:57.433 1+0 records in 00:06:57.433 1+0 records out 00:06:57.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113637 s, 3.6 MB/s 00:06:57.433 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.694 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:57.694 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.694 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:57.694 17:48:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:57.694 17:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:57.694 17:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:57.694 17:48:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
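The lines above repeat one pattern per bdev: nbd_start_disk is called without a device argument, so the RPC picks a free /dev/nbdX and prints it; waitfornbd then polls /proc/partitions (up to 20 tries) for the new node, and a single 4 KiB direct-I/O dd read proves the export is usable. Condensed into one hypothetical helper, with error handling omitted:

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-nbd.sock

    start_and_probe() {
        local bdev=$1 dev
        # The RPC prints the /dev/nbdX it allocated for this bdev.
        dev=$("$SPDK/scripts/rpc.py" -s "$SOCK" nbd_start_disk "$bdev")
        # waitfornbd: poll until the kernel lists the device.
        for _ in $(seq 1 20); do
            grep -q -w "$(basename "$dev")" /proc/partitions && break
            sleep 0.1
        done
        # Prove one block is readable through the export.
        dd if="$dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        [ "$(stat -c %s /tmp/nbdtest)" -ne 0 ]
    }

    start_and_probe Nvme2n1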
00:06:57.694 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:57.694 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:57.694 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:57.694 17:48:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:06:57.694 17:48:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:57.694 17:48:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:57.694 17:48:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:57.694 17:48:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:06:57.694 17:48:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:57.694 17:48:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:57.694 17:48:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:57.694 17:48:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:57.694 1+0 records in 00:06:57.694 1+0 records out 00:06:57.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000785499 s, 5.2 MB/s 00:06:57.694 17:48:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.694 17:48:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:57.694 17:48:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.694 17:48:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:57.694 17:48:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:57.694 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:57.694 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:57.694 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:57.954 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:57.954 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:57.954 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:57.954 17:48:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:06:57.954 17:48:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:57.954 17:48:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:57.954 17:48:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:57.954 17:48:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:06:57.954 17:48:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:57.954 17:48:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:57.954 17:48:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:57.954 17:48:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:57.954 1+0 records in 00:06:57.954 1+0 records out 00:06:57.954 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000798182 s, 5.1 MB/s 00:06:57.954 17:48:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.954 17:48:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:57.954 17:48:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.954 17:48:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:57.954 17:48:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:57.954 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:57.954 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:57.954 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:58.213 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:58.213 { 00:06:58.213 "nbd_device": "/dev/nbd0", 00:06:58.213 "bdev_name": "Nvme0n1" 00:06:58.213 }, 00:06:58.213 { 00:06:58.213 "nbd_device": "/dev/nbd1", 00:06:58.213 "bdev_name": "Nvme1n1p1" 00:06:58.213 }, 00:06:58.213 { 00:06:58.213 "nbd_device": "/dev/nbd2", 00:06:58.213 "bdev_name": "Nvme1n1p2" 00:06:58.213 }, 00:06:58.213 { 00:06:58.213 "nbd_device": "/dev/nbd3", 00:06:58.213 "bdev_name": "Nvme2n1" 00:06:58.213 }, 00:06:58.213 { 00:06:58.213 "nbd_device": "/dev/nbd4", 00:06:58.213 "bdev_name": "Nvme2n2" 00:06:58.213 }, 00:06:58.213 { 00:06:58.213 "nbd_device": "/dev/nbd5", 00:06:58.213 "bdev_name": "Nvme2n3" 00:06:58.213 }, 00:06:58.213 { 00:06:58.213 "nbd_device": "/dev/nbd6", 00:06:58.213 "bdev_name": "Nvme3n1" 00:06:58.213 } 00:06:58.213 ]' 00:06:58.213 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:58.213 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:58.213 { 00:06:58.213 "nbd_device": "/dev/nbd0", 00:06:58.213 "bdev_name": "Nvme0n1" 00:06:58.213 }, 00:06:58.213 { 00:06:58.213 "nbd_device": "/dev/nbd1", 00:06:58.213 "bdev_name": "Nvme1n1p1" 00:06:58.213 }, 00:06:58.213 { 00:06:58.213 "nbd_device": "/dev/nbd2", 00:06:58.213 "bdev_name": "Nvme1n1p2" 00:06:58.213 }, 00:06:58.213 { 00:06:58.213 "nbd_device": "/dev/nbd3", 00:06:58.213 "bdev_name": "Nvme2n1" 00:06:58.213 }, 00:06:58.213 { 00:06:58.213 "nbd_device": "/dev/nbd4", 00:06:58.213 "bdev_name": "Nvme2n2" 00:06:58.213 }, 00:06:58.213 { 00:06:58.213 "nbd_device": "/dev/nbd5", 00:06:58.213 "bdev_name": "Nvme2n3" 00:06:58.213 }, 00:06:58.213 { 00:06:58.213 "nbd_device": "/dev/nbd6", 00:06:58.213 "bdev_name": "Nvme3n1" 00:06:58.213 } 00:06:58.213 ]' 00:06:58.213 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:58.213 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:58.213 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.213 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:58.213 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:58.213 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:58.213 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.213 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:58.474 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:58.474 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:58.474 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:58.474 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.474 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.474 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:58.474 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.474 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.474 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.474 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:58.737 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:58.737 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:58.737 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:58.737 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.737 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.737 17:48:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:58.737 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.737 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.737 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.737 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:58.999 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:58.999 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:58.999 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:58.999 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.999 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.999 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:58.999 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.999 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.999 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.999 17:48:17 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:58.999 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:58.999 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:58.999 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:58.999 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.999 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.999 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:58.999 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.999 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.999 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.999 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:59.261 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:59.261 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:59.261 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:59.261 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.261 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.261 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:59.261 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:59.261 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.261 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.261 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:59.522 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:59.522 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:59.522 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:59.522 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.522 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.522 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:59.522 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:59.522 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.522 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.522 17:48:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:59.783 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:06:59.783 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:59.783 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
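The stop path mirrors the start path: nbd_get_disks returns the active exports as JSON, jq pulls out each nbd_device, nbd_stop_disk detaches it, and waitfornbd_exit loops until the name disappears from /proc/partitions. A sketch of that teardown, assuming the same SPDK and SOCK paths as above:

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-nbd.sock

    for dev in $("$SPDK/scripts/rpc.py" -s "$SOCK" nbd_get_disks |
                     jq -r '.[] | .nbd_device'); do
        "$SPDK/scripts/rpc.py" -s "$SOCK" nbd_stop_disk "$dev"
        # waitfornbd_exit: wait for the kernel to drop the device.
        for _ in $(seq 1 20); do
            grep -q -w "$(basename "$dev")" /proc/partitions || break
            sleep 0.1
        done
    done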
00:06:59.783 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.783 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.783 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:59.783 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:59.783 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.783 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:59.783 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.783 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.045 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:00.045 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.045 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:00.045 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:00.045 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.045 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:00.045 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:00.045 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:00.045 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:00.045 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:00.045 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:00.045 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:00.045 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:00.045 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.045 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:00.045 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:00.045 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:00.045 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:00.045 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:00.045 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.045 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:00.045 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:00.045 17:48:18 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:00.045 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:00.045 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:00.045 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:00.045 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:00.045 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:00.308 /dev/nbd0 00:07:00.308 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:00.308 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:00.308 17:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:00.308 17:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:00.308 17:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:00.308 17:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:00.308 17:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:00.308 17:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:00.308 17:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:00.308 17:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:00.308 17:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.308 1+0 records in 00:07:00.308 1+0 records out 00:07:00.308 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00416462 s, 984 kB/s 00:07:00.308 17:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.308 17:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:00.308 17:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.308 17:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:00.308 17:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:00.308 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.308 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:00.308 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:00.308 /dev/nbd1 00:07:00.308 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:00.571 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:00.571 17:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:00.571 17:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:00.571 17:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:00.571 17:48:18 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:00.571 17:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:00.571 17:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:00.571 17:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:00.571 17:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:00.571 17:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.571 1+0 records in 00:07:00.571 1+0 records out 00:07:00.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000918215 s, 4.5 MB/s 00:07:00.571 17:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.571 17:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:00.571 17:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.571 17:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:00.571 17:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:00.571 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.571 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:00.571 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:00.571 /dev/nbd10 00:07:00.571 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:00.571 17:48:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:00.571 17:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:00.571 17:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:00.571 17:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:00.571 17:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:00.571 17:48:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:00.571 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:00.571 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:00.571 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:00.571 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.834 1+0 records in 00:07:00.834 1+0 records out 00:07:00.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000926494 s, 4.4 MB/s 00:07:00.834 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.834 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:00.834 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.834 17:48:19 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:00.834 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:00.834 17:48:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.834 17:48:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:00.834 17:48:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:00.834 /dev/nbd11 00:07:00.834 17:48:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:00.834 17:48:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:00.834 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:00.834 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:00.834 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:00.834 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:00.834 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:00.834 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:00.834 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:00.834 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:00.834 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.834 1+0 records in 00:07:00.834 1+0 records out 00:07:00.835 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102274 s, 4.0 MB/s 00:07:00.835 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.835 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:00.835 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.835 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:00.835 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:00.835 17:48:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.835 17:48:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:00.835 17:48:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:01.096 /dev/nbd12 00:07:01.096 17:48:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:01.096 17:48:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:01.096 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:01.096 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:01.096 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:01.096 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:01.096 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 
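This second pass (nbd_rpc_data_verify) starts each bdev at a fixed device path, e.g. nbd_start_disk Nvme2n2 /dev/nbd12, so a known list of nodes can be exercised; the trace below then writes a 1 MiB random pattern through every node and compares it back with cmp. The round trip for one device, sketched with a /tmp pattern file in place of the repo-path nbdrandtest the test uses:

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-nbd.sock
    PATTERN=/tmp/nbdrandtest

    "$SPDK/scripts/rpc.py" -s "$SOCK" nbd_start_disk Nvme2n2 /dev/nbd12

    dd if=/dev/urandom of="$PATTERN" bs=4096 count=256        # 1 MiB pattern
    dd if="$PATTERN" of=/dev/nbd12 bs=4096 count=256 oflag=direct
    cmp -b -n 1M "$PATTERN" /dev/nbd12                        # byte-for-byte readback
    rm "$PATTERN"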
00:07:01.096 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:01.096 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:01.096 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:01.096 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.096 1+0 records in 00:07:01.096 1+0 records out 00:07:01.096 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000816144 s, 5.0 MB/s 00:07:01.096 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.096 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:01.096 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.096 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:01.096 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:01.096 17:48:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.096 17:48:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:01.096 17:48:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:01.358 /dev/nbd13 00:07:01.358 17:48:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:01.358 17:48:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:01.358 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:01.358 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:01.358 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:01.358 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:01.358 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:01.358 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:01.358 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:01.358 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:01.358 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.358 1+0 records in 00:07:01.358 1+0 records out 00:07:01.358 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00122306 s, 3.3 MB/s 00:07:01.358 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.358 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:01.358 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.358 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:01.358 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:01.358 17:48:19 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.358 17:48:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:01.358 17:48:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:01.619 /dev/nbd14 00:07:01.619 17:48:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:01.619 17:48:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:01.619 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:07:01.619 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:01.619 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:01.619 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:01.619 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:07:01.619 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:01.619 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:01.619 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:01.619 17:48:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.619 1+0 records in 00:07:01.619 1+0 records out 00:07:01.619 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110615 s, 3.7 MB/s 00:07:01.619 17:48:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.619 17:48:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:01.619 17:48:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.619 17:48:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:01.620 17:48:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:01.620 17:48:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.620 17:48:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:01.620 17:48:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:01.620 17:48:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.620 17:48:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:01.881 17:48:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:01.881 { 00:07:01.881 "nbd_device": "/dev/nbd0", 00:07:01.881 "bdev_name": "Nvme0n1" 00:07:01.881 }, 00:07:01.881 { 00:07:01.881 "nbd_device": "/dev/nbd1", 00:07:01.881 "bdev_name": "Nvme1n1p1" 00:07:01.881 }, 00:07:01.881 { 00:07:01.881 "nbd_device": "/dev/nbd10", 00:07:01.881 "bdev_name": "Nvme1n1p2" 00:07:01.881 }, 00:07:01.881 { 00:07:01.881 "nbd_device": "/dev/nbd11", 00:07:01.881 "bdev_name": "Nvme2n1" 00:07:01.881 }, 00:07:01.881 { 00:07:01.881 "nbd_device": "/dev/nbd12", 00:07:01.881 "bdev_name": "Nvme2n2" 00:07:01.881 }, 00:07:01.881 { 00:07:01.881 "nbd_device": "/dev/nbd13", 00:07:01.881 "bdev_name": "Nvme2n3" 
00:07:01.881 }, 00:07:01.881 { 00:07:01.881 "nbd_device": "/dev/nbd14", 00:07:01.881 "bdev_name": "Nvme3n1" 00:07:01.881 } 00:07:01.881 ]' 00:07:01.881 17:48:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:01.881 17:48:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:01.881 { 00:07:01.881 "nbd_device": "/dev/nbd0", 00:07:01.881 "bdev_name": "Nvme0n1" 00:07:01.881 }, 00:07:01.881 { 00:07:01.881 "nbd_device": "/dev/nbd1", 00:07:01.881 "bdev_name": "Nvme1n1p1" 00:07:01.881 }, 00:07:01.881 { 00:07:01.881 "nbd_device": "/dev/nbd10", 00:07:01.881 "bdev_name": "Nvme1n1p2" 00:07:01.881 }, 00:07:01.881 { 00:07:01.881 "nbd_device": "/dev/nbd11", 00:07:01.881 "bdev_name": "Nvme2n1" 00:07:01.881 }, 00:07:01.881 { 00:07:01.881 "nbd_device": "/dev/nbd12", 00:07:01.881 "bdev_name": "Nvme2n2" 00:07:01.881 }, 00:07:01.881 { 00:07:01.881 "nbd_device": "/dev/nbd13", 00:07:01.881 "bdev_name": "Nvme2n3" 00:07:01.881 }, 00:07:01.881 { 00:07:01.881 "nbd_device": "/dev/nbd14", 00:07:01.881 "bdev_name": "Nvme3n1" 00:07:01.881 } 00:07:01.881 ]' 00:07:01.881 17:48:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:01.881 /dev/nbd1 00:07:01.881 /dev/nbd10 00:07:01.881 /dev/nbd11 00:07:01.881 /dev/nbd12 00:07:01.881 /dev/nbd13 00:07:01.881 /dev/nbd14' 00:07:01.881 17:48:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:01.881 17:48:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:01.881 /dev/nbd1 00:07:01.881 /dev/nbd10 00:07:01.881 /dev/nbd11 00:07:01.881 /dev/nbd12 00:07:01.881 /dev/nbd13 00:07:01.881 /dev/nbd14' 00:07:01.881 17:48:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:01.881 17:48:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:01.881 17:48:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:01.881 17:48:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:01.881 17:48:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:01.881 17:48:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:01.881 17:48:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:01.881 17:48:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:01.881 17:48:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:01.881 17:48:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:01.881 17:48:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:01.881 256+0 records in 00:07:01.881 256+0 records out 00:07:01.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00735382 s, 143 MB/s 00:07:01.881 17:48:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.881 17:48:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:02.144 256+0 records in 00:07:02.144 256+0 records out 00:07:02.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.250575 s, 4.2 MB/s 00:07:02.144 17:48:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.144 17:48:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:02.405 256+0 records in 00:07:02.405 256+0 records out 00:07:02.405 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.257327 s, 4.1 MB/s 00:07:02.405 17:48:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.405 17:48:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:02.666 256+0 records in 00:07:02.666 256+0 records out 00:07:02.666 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.257665 s, 4.1 MB/s 00:07:02.666 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.666 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:02.928 256+0 records in 00:07:02.928 256+0 records out 00:07:02.928 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.191804 s, 5.5 MB/s 00:07:02.928 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.928 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:02.928 256+0 records in 00:07:02.928 256+0 records out 00:07:02.928 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0659255 s, 15.9 MB/s 00:07:02.928 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.928 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:03.189 256+0 records in 00:07:03.189 256+0 records out 00:07:03.189 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.171303 s, 6.1 MB/s 00:07:03.189 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:03.189 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:03.189 256+0 records in 00:07:03.189 256+0 records out 00:07:03.189 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121448 s, 8.6 MB/s 00:07:03.189 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:03.189 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:03.189 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:03.189 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:03.190 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:03.190 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:03.190 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:03.190 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:03.190 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:03.190 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.451 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:03.451 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.451 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:03.451 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.451 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:03.451 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.451 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:03.451 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.451 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:03.451 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.451 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:03.451 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:03.451 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:03.451 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.451 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:03.451 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:03.451 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:03.451 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.451 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:03.451 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:03.451 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:03.451 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:03.451 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.451 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.451 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:03.451 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.451 17:48:21 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:03.710 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.710 17:48:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:03.710 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:03.710 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:03.710 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:03.710 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.710 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.710 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:03.710 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.710 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.710 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.710 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:04.032 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:04.032 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:04.032 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:04.032 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.032 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.032 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:04.032 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.032 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.032 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.032 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:04.032 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:04.032 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:04.032 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:04.032 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.032 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.032 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:04.032 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.032 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.032 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.032 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:04.301 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:04.301 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:04.301 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:04.301 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.301 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.301 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:04.301 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.301 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.301 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.301 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:04.563 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:04.563 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:04.563 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:04.563 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.563 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.563 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:04.563 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.563 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.563 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.563 17:48:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:04.825 17:48:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:04.825 17:48:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:04.825 17:48:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:04.825 17:48:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.825 17:48:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.825 17:48:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:04.825 17:48:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.825 17:48:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.825 17:48:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.825 17:48:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.825 17:48:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.085 17:48:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:05.085 17:48:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:05.085 17:48:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.085 17:48:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:05.085 17:48:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:05.085 17:48:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.085 17:48:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:05.085 17:48:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:05.085 17:48:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:05.085 17:48:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:05.085 17:48:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:05.085 17:48:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:05.085 17:48:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:05.085 17:48:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.085 17:48:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:05.085 17:48:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:05.343 malloc_lvol_verify 00:07:05.343 17:48:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:05.343 b5257dec-f8d8-473a-8795-3276dd0244e6 00:07:05.343 17:48:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:05.602 1b0a7e06-f8fb-4e1e-b8fd-ef294211e826 00:07:05.602 17:48:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:05.861 /dev/nbd0 00:07:05.861 17:48:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:05.861 17:48:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:05.861 17:48:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:05.861 17:48:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:05.861 17:48:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:05.861 mke2fs 1.47.0 (5-Feb-2023) 00:07:05.861 Discarding device blocks: 0/4096 done 00:07:05.861 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:05.861 00:07:05.861 Allocating group tables: 0/1 done 00:07:05.861 Writing inode tables: 0/1 done 00:07:05.861 Creating journal (1024 blocks): done 00:07:05.861 Writing superblocks and filesystem accounting information: 0/1 done 00:07:05.861 00:07:05.861 17:48:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:05.861 17:48:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.861 17:48:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:05.861 17:48:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:05.861 17:48:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:05.861 17:48:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:05.861 17:48:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:06.119 17:48:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:06.120 17:48:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:06.120 17:48:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:06.120 17:48:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.120 17:48:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.120 17:48:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:06.120 17:48:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.120 17:48:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.120 17:48:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61384 00:07:06.120 17:48:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 61384 ']' 00:07:06.120 17:48:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 61384 00:07:06.120 17:48:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:06.120 17:48:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:06.120 17:48:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61384 00:07:06.120 killing process with pid 61384 00:07:06.120 17:48:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:06.120 17:48:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:06.120 17:48:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61384' 00:07:06.120 17:48:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 61384 00:07:06.120 17:48:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 61384 00:07:07.054 ************************************ 00:07:07.054 END TEST bdev_nbd 00:07:07.054 ************************************ 00:07:07.054 17:48:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:07.054 00:07:07.054 real 0m11.427s 00:07:07.054 user 0m15.760s 00:07:07.054 sys 0m3.689s 00:07:07.054 17:48:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.054 17:48:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:07.054 17:48:25 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:07.054 17:48:25 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:07.054 17:48:25 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:07.054 skipping fio tests on NVMe due to multi-ns failures. 00:07:07.054 17:48:25 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:07.054 17:48:25 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:07.054 17:48:25 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:07.054 17:48:25 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:07.054 17:48:25 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.054 17:48:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:07.054 ************************************ 00:07:07.054 START TEST bdev_verify 00:07:07.054 ************************************ 00:07:07.054 17:48:25 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:07.054 [2024-10-25 17:48:25.281566] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:07:07.054 [2024-10-25 17:48:25.281678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61805 ] 00:07:07.054 [2024-10-25 17:48:25.451094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:07.313 [2024-10-25 17:48:25.544600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.313 [2024-10-25 17:48:25.544615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.879 Running I/O for 5 seconds... 
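The bdev_verify run above boils down to a single bdevperf invocation; restated here with the knobs spelled out. The flag meanings are bdevperf's usual ones, and -C is carried over from the trace without interpretation:

    # verify workload: write a pattern, read it back, checksum it
    #   -q 128    queue depth per job
    #   -o 4096   I/O size in bytes
    #   -w verify read-back/verify workload
    #   -t 5      run time in seconds
    #   -m 0x3    core mask: the two reactors started above
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''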
00:07:10.188 20992.00 IOPS, 82.00 MiB/s [2024-10-25T17:48:29.556Z] 23008.00 IOPS, 89.88 MiB/s [2024-10-25T17:48:30.491Z] 24320.00 IOPS, 95.00 MiB/s [2024-10-25T17:48:31.424Z] 24432.00 IOPS, 95.44 MiB/s [2024-10-25T17:48:31.424Z] 24563.20 IOPS, 95.95 MiB/s 00:07:12.989 Latency(us) 00:07:12.989 [2024-10-25T17:48:31.424Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:12.989 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:12.989 Verification LBA range: start 0x0 length 0xbd0bd 00:07:12.989 Nvme0n1 : 5.06 1757.96 6.87 0.00 0.00 72478.44 8670.92 78239.90 00:07:12.989 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:12.989 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:12.989 Nvme0n1 : 5.06 1696.44 6.63 0.00 0.00 74983.08 15728.64 67754.14 00:07:12.989 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:12.989 Verification LBA range: start 0x0 length 0x4ff80 00:07:12.989 Nvme1n1p1 : 5.08 1765.42 6.90 0.00 0.00 72254.48 12098.95 69367.34 00:07:12.989 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:12.989 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:12.989 Nvme1n1p1 : 5.06 1695.92 6.62 0.00 0.00 74879.28 14720.39 64931.05 00:07:12.989 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:12.989 Verification LBA range: start 0x0 length 0x4ff7f 00:07:12.989 Nvme1n1p2 : 5.06 1756.98 6.86 0.00 0.00 72306.91 8620.50 66544.25 00:07:12.989 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:12.989 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:12.989 Nvme1n1p2 : 5.06 1695.42 6.62 0.00 0.00 74743.04 14014.62 64124.46 00:07:12.989 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:12.989 Verification LBA range: start 0x0 length 0x80000 00:07:12.989 Nvme2n1 : 5.08 1764.34 6.89 0.00 0.00 72011.36 12351.02 62107.96 00:07:12.989 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:12.989 Verification LBA range: start 0x80000 length 0x80000 00:07:12.989 Nvme2n1 : 5.08 1711.74 6.69 0.00 0.00 73991.02 8318.03 65334.35 00:07:12.989 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:12.989 Verification LBA range: start 0x0 length 0x80000 00:07:12.989 Nvme2n2 : 5.08 1763.88 6.89 0.00 0.00 71893.35 12603.08 62511.26 00:07:12.989 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:12.989 Verification LBA range: start 0x80000 length 0x80000 00:07:12.989 Nvme2n2 : 5.09 1711.30 6.68 0.00 0.00 73870.23 8620.50 68964.04 00:07:12.989 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:12.989 Verification LBA range: start 0x0 length 0x80000 00:07:12.989 Nvme2n3 : 5.08 1763.42 6.89 0.00 0.00 71766.08 12703.90 67350.84 00:07:12.989 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:12.989 Verification LBA range: start 0x80000 length 0x80000 00:07:12.989 Nvme2n3 : 5.09 1710.82 6.68 0.00 0.00 73771.06 9023.80 70980.53 00:07:12.989 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:12.989 Verification LBA range: start 0x0 length 0x20000 00:07:12.989 Nvme3n1 : 5.08 1762.84 6.89 0.00 0.00 71635.56 10687.41 70577.23 00:07:12.989 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:12.989 Verification LBA range: start 0x20000 length 0x20000 00:07:12.989 Nvme3n1 : 
5.05 1696.95 6.63 0.00 0.00 75182.62 15930.29 80256.39 00:07:12.989 [2024-10-25T17:48:31.424Z] =================================================================================================================== 00:07:12.989 [2024-10-25T17:48:31.424Z] Total : 24253.42 94.74 0.00 0.00 73245.20 8318.03 80256.39 00:07:13.982 00:07:13.982 real 0m6.890s 00:07:13.982 user 0m12.905s 00:07:13.982 sys 0m0.196s 00:07:13.982 17:48:32 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.982 17:48:32 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:13.982 ************************************ 00:07:13.982 END TEST bdev_verify 00:07:13.982 ************************************ 00:07:13.982 17:48:32 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:13.982 17:48:32 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:13.982 17:48:32 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.982 17:48:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.982 ************************************ 00:07:13.982 START TEST bdev_verify_big_io 00:07:13.982 ************************************ 00:07:13.982 17:48:32 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:13.982 [2024-10-25 17:48:32.207406] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:07:13.982 [2024-10-25 17:48:32.207513] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61897 ] 00:07:13.982 [2024-10-25 17:48:32.367112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:14.241 [2024-10-25 17:48:32.461391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.241 [2024-10-25 17:48:32.461468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.806 Running I/O for 5 seconds... 
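The IOPS and MiB/s columns above are consistent by construction: MiB/s = IOPS * I/O size / 2^20. Spot-checking the last per-second sample of the verify run:

    24563.20 IOPS * 4096 B = 100,610,867 B/s
    100,610,867 / 1,048,576 = 95.95 MiB/s

which matches the reported 95.95 MiB/s. The big-I/O variant that has just started is the same invocation with -o 65536 (64 KiB) in place of -o 4096, trading IOPS for per-I/O payload (e.g. 2764.00 IOPS * 65536 B / 2^20 = 172.75 MiB/s, as reported below).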
00:07:19.920 1840.00 IOPS, 115.00 MiB/s [2024-10-25T17:48:39.290Z] 2766.50 IOPS, 172.91 MiB/s [2024-10-25T17:48:39.856Z] 2980.67 IOPS, 186.29 MiB/s [2024-10-25T17:48:39.856Z] 2764.00 IOPS, 172.75 MiB/s 00:07:21.421 Latency(us) 00:07:21.421 [2024-10-25T17:48:39.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:21.421 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.421 Verification LBA range: start 0x0 length 0xbd0b 00:07:21.421 Nvme0n1 : 5.74 112.18 7.01 0.00 0.00 1080192.74 15022.87 1187310.67 00:07:21.421 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.421 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:21.421 Nvme0n1 : 5.86 98.34 6.15 0.00 0.00 1221341.65 12149.37 1832588.21 00:07:21.421 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.421 Verification LBA range: start 0x0 length 0x4ff8 00:07:21.421 Nvme1n1p1 : 5.74 116.49 7.28 0.00 0.00 1027232.23 97194.93 1013085.74 00:07:21.421 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.421 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:21.421 Nvme1n1p1 : 6.01 101.56 6.35 0.00 0.00 1146150.43 98808.12 1639004.95 00:07:21.421 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.421 Verification LBA range: start 0x0 length 0x4ff7 00:07:21.421 Nvme1n1p2 : 5.82 120.53 7.53 0.00 0.00 968329.67 69367.34 1129235.69 00:07:21.421 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.421 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:21.421 Nvme1n1p2 : 6.01 101.20 6.32 0.00 0.00 1109800.42 99614.72 1445421.69 00:07:21.421 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.421 Verification LBA range: start 0x0 length 0x8000 00:07:21.421 Nvme2n1 : 5.94 124.11 7.76 0.00 0.00 904494.61 42749.64 1148594.02 00:07:21.421 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.421 Verification LBA range: start 0x8000 length 0x8000 00:07:21.421 Nvme2n1 : 6.09 108.74 6.80 0.00 0.00 1000341.02 76223.41 1200216.22 00:07:21.421 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.421 Verification LBA range: start 0x0 length 0x8000 00:07:21.421 Nvme2n2 : 5.94 129.29 8.08 0.00 0.00 852292.27 77836.60 1045349.61 00:07:21.421 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.421 Verification LBA range: start 0x8000 length 0x8000 00:07:21.421 Nvme2n2 : 6.17 117.40 7.34 0.00 0.00 881699.15 29440.79 1555118.87 00:07:21.421 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.421 Verification LBA range: start 0x0 length 0x8000 00:07:21.421 Nvme2n3 : 6.01 138.39 8.65 0.00 0.00 775279.61 28230.89 1058255.16 00:07:21.422 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.422 Verification LBA range: start 0x8000 length 0x8000 00:07:21.422 Nvme2n3 : 6.29 146.26 9.14 0.00 0.00 681786.49 12603.08 2090699.22 00:07:21.422 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.422 Verification LBA range: start 0x0 length 0x2000 00:07:21.422 Nvme3n1 : 6.09 157.64 9.85 0.00 0.00 663778.99 705.77 1077613.49 00:07:21.422 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.422 Verification LBA range: start 0x2000 length 0x2000 00:07:21.422 Nvme3n1 : 6.54 271.24 16.95 0.00 0.00 
350543.38 270.97 1677721.60 00:07:21.422 [2024-10-25T17:48:39.857Z] =================================================================================================================== 00:07:21.422 [2024-10-25T17:48:39.857Z] Total : 1843.37 115.21 0.00 0.00 830814.66 270.97 2090699.22 00:07:22.796 00:07:22.796 real 0m9.062s 00:07:22.796 user 0m17.250s 00:07:22.796 sys 0m0.225s 00:07:22.796 17:48:41 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.796 17:48:41 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:22.796 ************************************ 00:07:22.796 END TEST bdev_verify_big_io 00:07:22.796 ************************************ 00:07:23.057 17:48:41 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:23.057 17:48:41 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:23.057 17:48:41 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.057 17:48:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:23.057 ************************************ 00:07:23.057 START TEST bdev_write_zeroes 00:07:23.057 ************************************ 00:07:23.057 17:48:41 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:23.057 [2024-10-25 17:48:41.313744] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:07:23.057 [2024-10-25 17:48:41.313859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62012 ] 00:07:23.057 [2024-10-25 17:48:41.470154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.352 [2024-10-25 17:48:41.547171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.663 Running I/O for 1 seconds... 
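bdev_write_zeroes drives the same bdevperf binary with -w write_zeroes -t 1, exercising the zero-fill path rather than data verification. Whether a given bdev advertises that operation can be checked over RPC; bdev_get_bdevs and the jq style below are the same ones this log uses elsewhere, while the one-liner itself is an illustrative addition (it assumes a running target on the default socket):

    # does Nvme0n1 report write_zeroes support? (illustrative spot check)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
        | jq -r '.[0].supported_io_types.write_zeroes'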
00:07:25.035 71232.00 IOPS, 278.25 MiB/s 00:07:25.035 Latency(us) 00:07:25.035 [2024-10-25T17:48:43.470Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:25.035 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.035 Nvme0n1 : 1.03 10101.07 39.46 0.00 0.00 12644.48 10989.88 24197.91 00:07:25.035 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.035 Nvme1n1p1 : 1.03 10088.66 39.41 0.00 0.00 12640.46 10737.82 23895.43 00:07:25.035 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.035 Nvme1n1p2 : 1.03 10076.35 39.36 0.00 0.00 12629.28 10637.00 23088.84 00:07:25.035 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.035 Nvme2n1 : 1.03 10065.06 39.32 0.00 0.00 12623.95 10889.06 22383.06 00:07:25.035 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.035 Nvme2n2 : 1.03 10053.73 39.27 0.00 0.00 12615.70 9779.99 21979.77 00:07:25.035 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.035 Nvme2n3 : 1.03 10042.50 39.23 0.00 0.00 12581.59 7158.55 22685.54 00:07:25.035 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.035 Nvme3n1 : 1.03 10031.25 39.18 0.00 0.00 12575.40 6452.78 24197.91 00:07:25.035 [2024-10-25T17:48:43.470Z] =================================================================================================================== 00:07:25.035 [2024-10-25T17:48:43.470Z] Total : 70458.62 275.23 0.00 0.00 12615.84 6452.78 24197.91 00:07:25.601 00:07:25.601 real 0m2.602s 00:07:25.601 user 0m2.329s 00:07:25.601 sys 0m0.162s 00:07:25.601 17:48:43 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.601 ************************************ 00:07:25.601 END TEST bdev_write_zeroes 00:07:25.601 ************************************ 00:07:25.601 17:48:43 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:25.601 17:48:43 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:25.601 17:48:43 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:25.601 17:48:43 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.601 17:48:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:25.601 ************************************ 00:07:25.601 START TEST bdev_json_nonenclosed 00:07:25.601 ************************************ 00:07:25.601 17:48:43 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:25.601 [2024-10-25 17:48:43.954977] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
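bdev_json_nonenclosed is a negative test: it hands bdevperf a configuration whose body is not wrapped in an enclosing object and expects the app to stop cleanly with a non-zero rc (the "not enclosed in {}" error below). The repo's nonenclosed.json is not echoed in this log, so the fixture here is an illustrative guess at its shape, written to a path of my choosing:

    # illustrative only: a config body missing the outer {} braces
    printf '%s\n' '"subsystems": [ { "subsystem": "bdev", "config": [] } ]' \
        > /tmp/nonenclosed.json
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' \
        || echo "stopped as expected, rc=$?"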
00:07:25.601 [2024-10-25 17:48:43.955382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62065 ] 00:07:25.860 [2024-10-25 17:48:44.115180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.860 [2024-10-25 17:48:44.207233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.860 [2024-10-25 17:48:44.207305] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:25.860 [2024-10-25 17:48:44.207322] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:25.860 [2024-10-25 17:48:44.207331] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:26.119 00:07:26.119 real 0m0.488s 00:07:26.119 user 0m0.310s 00:07:26.119 sys 0m0.074s 00:07:26.119 17:48:44 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.119 17:48:44 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:26.119 ************************************ 00:07:26.119 END TEST bdev_json_nonenclosed 00:07:26.119 ************************************ 00:07:26.119 17:48:44 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:26.119 17:48:44 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:26.119 17:48:44 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.119 17:48:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:26.119 ************************************ 00:07:26.119 START TEST bdev_json_nonarray 00:07:26.119 ************************************ 00:07:26.119 17:48:44 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:26.119 [2024-10-25 17:48:44.479197] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:07:26.119 [2024-10-25 17:48:44.479311] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62085 ] 00:07:26.377 [2024-10-25 17:48:44.640571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.377 [2024-10-25 17:48:44.735265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.378 [2024-10-25 17:48:44.735345] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
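The companion case, bdev_json_nonarray, hits the "'subsystems' should be an array" branch logged just above; again the shipped fixture is not shown, so this shape is assumed:

    # illustrative only: "subsystems" present but as an object, not an array
    printf '%s\n' '{ "subsystems": { "subsystem": "bdev", "config": [] } }' \
        > /tmp/nonarray.json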
00:07:26.378 [2024-10-25 17:48:44.735362] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:26.378 [2024-10-25 17:48:44.735371] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:26.636 00:07:26.636 real 0m0.488s 00:07:26.636 user 0m0.301s 00:07:26.636 sys 0m0.083s 00:07:26.636 17:48:44 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.636 17:48:44 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:26.636 ************************************ 00:07:26.636 END TEST bdev_json_nonarray 00:07:26.636 ************************************ 00:07:26.636 17:48:44 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:26.636 17:48:44 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:26.636 17:48:44 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:26.636 17:48:44 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:26.636 17:48:44 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.636 17:48:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:26.636 ************************************ 00:07:26.636 START TEST bdev_gpt_uuid 00:07:26.636 ************************************ 00:07:26.636 17:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:07:26.636 17:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:26.636 17:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:26.636 17:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62116 00:07:26.636 17:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:26.636 17:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62116 00:07:26.636 17:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 62116 ']' 00:07:26.636 17:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.636 17:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:26.636 17:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:26.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.636 17:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.636 17:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:26.636 17:48:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:26.636 [2024-10-25 17:48:45.020120] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
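Once the target is up, bdev_gpt_uuid's checks reduce to: fetch the partition bdev by its unique partition GUID and confirm that the alias and the driver_specific.gpt fields round-trip. A condensed sketch using the same RPC and jq filters as the trace that follows; the GUID is the one from this run and the socket is the target's default:

    sock=/var/tmp/spdk.sock
    uuid=6f89f330-603b-4116-ac73-2ca8eae53030
    bdev=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" \
        bdev_get_bdevs -b "$uuid")
    # exactly one bdev, aliased by its GUID, and the GPT metadata agrees
    [[ $(jq -r 'length' <<<"$bdev") == 1 ]]
    [[ $(jq -r '.[0].aliases[0]' <<<"$bdev") == "$uuid" ]]
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev") == "$uuid" ]]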
00:07:26.636 [2024-10-25 17:48:45.020246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62116 ] 00:07:26.896 [2024-10-25 17:48:45.177913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.896 [2024-10-25 17:48:45.271601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.462 17:48:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.462 17:48:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:07:27.462 17:48:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:27.462 17:48:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.462 17:48:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:28.028 Some configs were skipped because the RPC state that can call them passed over. 00:07:28.028 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.028 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:28.028 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.028 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:28.028 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.028 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:28.028 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.028 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:28.028 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.028 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:28.028 { 00:07:28.028 "name": "Nvme1n1p1", 00:07:28.028 "aliases": [ 00:07:28.028 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:28.028 ], 00:07:28.028 "product_name": "GPT Disk", 00:07:28.028 "block_size": 4096, 00:07:28.028 "num_blocks": 655104, 00:07:28.028 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:28.028 "assigned_rate_limits": { 00:07:28.028 "rw_ios_per_sec": 0, 00:07:28.028 "rw_mbytes_per_sec": 0, 00:07:28.028 "r_mbytes_per_sec": 0, 00:07:28.028 "w_mbytes_per_sec": 0 00:07:28.028 }, 00:07:28.028 "claimed": false, 00:07:28.028 "zoned": false, 00:07:28.028 "supported_io_types": { 00:07:28.028 "read": true, 00:07:28.028 "write": true, 00:07:28.028 "unmap": true, 00:07:28.028 "flush": true, 00:07:28.029 "reset": true, 00:07:28.029 "nvme_admin": false, 00:07:28.029 "nvme_io": false, 00:07:28.029 "nvme_io_md": false, 00:07:28.029 "write_zeroes": true, 00:07:28.029 "zcopy": false, 00:07:28.029 "get_zone_info": false, 00:07:28.029 "zone_management": false, 00:07:28.029 "zone_append": false, 00:07:28.029 "compare": true, 00:07:28.029 "compare_and_write": false, 00:07:28.029 "abort": true, 00:07:28.029 "seek_hole": false, 00:07:28.029 "seek_data": false, 00:07:28.029 "copy": true, 00:07:28.029 "nvme_iov_md": false 00:07:28.029 }, 00:07:28.029 "driver_specific": { 
00:07:28.029 "gpt": { 00:07:28.029 "base_bdev": "Nvme1n1", 00:07:28.029 "offset_blocks": 256, 00:07:28.029 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:28.029 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:28.029 "partition_name": "SPDK_TEST_first" 00:07:28.029 } 00:07:28.029 } 00:07:28.029 } 00:07:28.029 ]' 00:07:28.029 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:28.029 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:28.029 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:28.029 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:28.029 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:28.029 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:28.029 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:28.029 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.029 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:28.029 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.029 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:28.029 { 00:07:28.029 "name": "Nvme1n1p2", 00:07:28.029 "aliases": [ 00:07:28.029 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:28.029 ], 00:07:28.029 "product_name": "GPT Disk", 00:07:28.029 "block_size": 4096, 00:07:28.029 "num_blocks": 655103, 00:07:28.029 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:28.029 "assigned_rate_limits": { 00:07:28.029 "rw_ios_per_sec": 0, 00:07:28.029 "rw_mbytes_per_sec": 0, 00:07:28.029 "r_mbytes_per_sec": 0, 00:07:28.029 "w_mbytes_per_sec": 0 00:07:28.029 }, 00:07:28.029 "claimed": false, 00:07:28.029 "zoned": false, 00:07:28.029 "supported_io_types": { 00:07:28.029 "read": true, 00:07:28.029 "write": true, 00:07:28.029 "unmap": true, 00:07:28.029 "flush": true, 00:07:28.029 "reset": true, 00:07:28.029 "nvme_admin": false, 00:07:28.029 "nvme_io": false, 00:07:28.029 "nvme_io_md": false, 00:07:28.029 "write_zeroes": true, 00:07:28.029 "zcopy": false, 00:07:28.029 "get_zone_info": false, 00:07:28.029 "zone_management": false, 00:07:28.029 "zone_append": false, 00:07:28.029 "compare": true, 00:07:28.029 "compare_and_write": false, 00:07:28.029 "abort": true, 00:07:28.029 "seek_hole": false, 00:07:28.029 "seek_data": false, 00:07:28.029 "copy": true, 00:07:28.029 "nvme_iov_md": false 00:07:28.029 }, 00:07:28.029 "driver_specific": { 00:07:28.029 "gpt": { 00:07:28.029 "base_bdev": "Nvme1n1", 00:07:28.029 "offset_blocks": 655360, 00:07:28.029 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:28.029 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:28.029 "partition_name": "SPDK_TEST_second" 00:07:28.029 } 00:07:28.029 } 00:07:28.029 } 00:07:28.029 ]' 00:07:28.029 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:28.029 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:28.029 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:28.029 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:28.029 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:28.029 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:28.029 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62116 00:07:28.029 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 62116 ']' 00:07:28.029 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 62116 00:07:28.029 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:07:28.029 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:28.029 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62116 00:07:28.029 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:28.029 killing process with pid 62116 00:07:28.029 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:28.029 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62116' 00:07:28.029 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 62116 00:07:28.029 17:48:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 62116 00:07:29.927 00:07:29.927 real 0m3.061s 00:07:29.927 user 0m3.233s 00:07:29.927 sys 0m0.331s 00:07:29.927 17:48:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.927 17:48:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:29.927 ************************************ 00:07:29.927 END TEST bdev_gpt_uuid 00:07:29.927 ************************************ 00:07:29.927 17:48:48 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:29.927 17:48:48 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:29.927 17:48:48 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:29.927 17:48:48 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:29.927 17:48:48 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:29.927 17:48:48 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:29.927 17:48:48 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:29.927 17:48:48 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:29.927 17:48:48 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:29.927 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:30.185 Waiting for block devices as requested 00:07:30.185 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:30.185 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:30.185 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:30.443 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:35.761 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:35.761 17:48:53 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:35.761 17:48:53 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:35.761 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:35.761 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:35.761 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:35.761 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:35.761 17:48:54 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:35.761 00:07:35.761 real 0m57.133s 00:07:35.761 user 1m13.157s 00:07:35.761 sys 0m7.900s 00:07:35.761 17:48:54 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.761 17:48:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:35.761 ************************************ 00:07:35.761 END TEST blockdev_nvme_gpt 00:07:35.761 ************************************ 00:07:35.761 17:48:54 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:35.761 17:48:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.761 17:48:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.761 17:48:54 -- common/autotest_common.sh@10 -- # set +x 00:07:35.761 ************************************ 00:07:35.761 START TEST nvme 00:07:35.761 ************************************ 00:07:35.761 17:48:54 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:35.761 * Looking for test storage... 00:07:35.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:35.761 17:48:54 nvme -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:35.761 17:48:54 nvme -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:35.761 17:48:54 nvme -- common/autotest_common.sh@1689 -- # lcov --version 00:07:35.761 17:48:54 nvme -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:35.761 17:48:54 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.761 17:48:54 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.761 17:48:54 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.761 17:48:54 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.761 17:48:54 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.761 17:48:54 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.761 17:48:54 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.761 17:48:54 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.761 17:48:54 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.761 17:48:54 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.761 17:48:54 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.761 17:48:54 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:35.761 17:48:54 nvme -- scripts/common.sh@345 -- # : 1 00:07:35.761 17:48:54 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.761 17:48:54 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:35.761 17:48:54 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:35.761 17:48:54 nvme -- scripts/common.sh@353 -- # local d=1 00:07:35.761 17:48:54 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.761 17:48:54 nvme -- scripts/common.sh@355 -- # echo 1 00:07:35.761 17:48:54 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.761 17:48:54 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:35.761 17:48:54 nvme -- scripts/common.sh@353 -- # local d=2 00:07:35.761 17:48:54 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.761 17:48:54 nvme -- scripts/common.sh@355 -- # echo 2 00:07:35.761 17:48:54 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.761 17:48:54 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.761 17:48:54 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.761 17:48:54 nvme -- scripts/common.sh@368 -- # return 0 00:07:35.761 17:48:54 nvme -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.761 17:48:54 nvme -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:35.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.761 --rc genhtml_branch_coverage=1 00:07:35.761 --rc genhtml_function_coverage=1 00:07:35.761 --rc genhtml_legend=1 00:07:35.761 --rc geninfo_all_blocks=1 00:07:35.761 --rc geninfo_unexecuted_blocks=1 00:07:35.761 00:07:35.761 ' 00:07:35.761 17:48:54 nvme -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:35.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.762 --rc genhtml_branch_coverage=1 00:07:35.762 --rc genhtml_function_coverage=1 00:07:35.762 --rc genhtml_legend=1 00:07:35.762 --rc geninfo_all_blocks=1 00:07:35.762 --rc geninfo_unexecuted_blocks=1 00:07:35.762 00:07:35.762 ' 00:07:35.762 17:48:54 nvme -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:35.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.762 --rc genhtml_branch_coverage=1 00:07:35.762 --rc genhtml_function_coverage=1 00:07:35.762 --rc genhtml_legend=1 00:07:35.762 --rc geninfo_all_blocks=1 00:07:35.762 --rc geninfo_unexecuted_blocks=1 00:07:35.762 00:07:35.762 ' 00:07:35.762 17:48:54 nvme -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:35.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.762 --rc genhtml_branch_coverage=1 00:07:35.762 --rc genhtml_function_coverage=1 00:07:35.762 --rc genhtml_legend=1 00:07:35.762 --rc geninfo_all_blocks=1 00:07:35.762 --rc geninfo_unexecuted_blocks=1 00:07:35.762 00:07:35.762 ' 00:07:35.762 17:48:54 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:36.464 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:36.722 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:36.722 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:36.722 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:36.722 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:36.722 17:48:55 nvme -- nvme/nvme.sh@79 -- # uname 00:07:36.722 17:48:55 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:36.722 17:48:55 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:36.722 17:48:55 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:36.722 17:48:55 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:36.722 17:48:55 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:07:36.722 17:48:55 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:07:36.722 17:48:55 nvme -- common/autotest_common.sh@1071 -- # stubpid=62744 00:07:36.722 Waiting for stub to ready for secondary processes... 00:07:36.722 17:48:55 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:07:36.722 17:48:55 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:36.722 17:48:55 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/62744 ]] 00:07:36.722 17:48:55 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:07:36.722 17:48:55 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:36.981 [2024-10-25 17:48:55.175623] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:07:36.981 [2024-10-25 17:48:55.175739] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:37.546 [2024-10-25 17:48:55.909395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:37.805 [2024-10-25 17:48:56.003621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.805 [2024-10-25 17:48:56.004016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.805 [2024-10-25 17:48:56.004041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.805 [2024-10-25 17:48:56.017258] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:37.805 [2024-10-25 17:48:56.017292] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:37.805 [2024-10-25 17:48:56.029163] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:37.805 [2024-10-25 17:48:56.029240] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:37.805 [2024-10-25 17:48:56.031320] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:37.805 [2024-10-25 17:48:56.031750] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:37.805 [2024-10-25 17:48:56.031903] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:37.805 [2024-10-25 17:48:56.036264] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:37.805 [2024-10-25 17:48:56.036606] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:37.805 [2024-10-25 17:48:56.036731] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:37.805 [2024-10-25 17:48:56.041249] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:37.805 [2024-10-25 17:48:56.041604] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:37.805 [2024-10-25 17:48:56.041747] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:37.805 [2024-10-25 17:48:56.041858] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:37.805 [2024-10-25 17:48:56.041960] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:37.805 done. 00:07:37.805 17:48:56 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:37.805 17:48:56 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:07:37.805 17:48:56 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:37.805 17:48:56 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:07:37.805 17:48:56 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.805 17:48:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:37.805 ************************************ 00:07:37.805 START TEST nvme_reset 00:07:37.805 ************************************ 00:07:37.805 17:48:56 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:38.063 Initializing NVMe Controllers 00:07:38.063 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:38.063 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:38.063 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:38.063 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:38.063 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:38.063 00:07:38.063 real 0m0.208s 00:07:38.063 user 0m0.073s 00:07:38.063 sys 0m0.091s 00:07:38.063 17:48:56 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.063 ************************************ 00:07:38.063 END TEST nvme_reset 00:07:38.063 ************************************ 00:07:38.063 17:48:56 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:38.063 17:48:56 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:38.063 17:48:56 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:38.063 17:48:56 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.063 17:48:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:38.063 ************************************ 00:07:38.063 START TEST nvme_identify 00:07:38.063 ************************************ 00:07:38.063 17:48:56 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:07:38.063 17:48:56 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:38.063 17:48:56 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:38.063 17:48:56 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:38.063 17:48:56 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:38.063 17:48:56 nvme.nvme_identify -- common/autotest_common.sh@1494 -- # bdfs=() 00:07:38.063 17:48:56 nvme.nvme_identify -- common/autotest_common.sh@1494 -- # local bdfs 00:07:38.063 17:48:56 nvme.nvme_identify -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:38.063 17:48:56 nvme.nvme_identify -- common/autotest_common.sh@1495 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:38.063 17:48:56 nvme.nvme_identify -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:07:38.063 17:48:56 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # (( 4 == 0 )) 00:07:38.063 17:48:56 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:38.063 17:48:56 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:38.323 [2024-10-25 
17:48:56.625455] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62766 terminated unexpected 00:07:38.323 ===================================================== 00:07:38.323 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:38.323 ===================================================== 00:07:38.323 Controller Capabilities/Features 00:07:38.323 ================================ 00:07:38.323 Vendor ID: 1b36 00:07:38.323 Subsystem Vendor ID: 1af4 00:07:38.323 Serial Number: 12340 00:07:38.323 Model Number: QEMU NVMe Ctrl 00:07:38.323 Firmware Version: 8.0.0 00:07:38.323 Recommended Arb Burst: 6 00:07:38.323 IEEE OUI Identifier: 00 54 52 00:07:38.323 Multi-path I/O 00:07:38.323 May have multiple subsystem ports: No 00:07:38.323 May have multiple controllers: No 00:07:38.323 Associated with SR-IOV VF: No 00:07:38.323 Max Data Transfer Size: 524288 00:07:38.323 Max Number of Namespaces: 256 00:07:38.323 Max Number of I/O Queues: 64 00:07:38.323 NVMe Specification Version (VS): 1.4 00:07:38.323 NVMe Specification Version (Identify): 1.4 00:07:38.323 Maximum Queue Entries: 2048 00:07:38.323 Contiguous Queues Required: Yes 00:07:38.323 Arbitration Mechanisms Supported 00:07:38.323 Weighted Round Robin: Not Supported 00:07:38.323 Vendor Specific: Not Supported 00:07:38.323 Reset Timeout: 7500 ms 00:07:38.323 Doorbell Stride: 4 bytes 00:07:38.323 NVM Subsystem Reset: Not Supported 00:07:38.323 Command Sets Supported 00:07:38.323 NVM Command Set: Supported 00:07:38.323 Boot Partition: Not Supported 00:07:38.323 Memory Page Size Minimum: 4096 bytes 00:07:38.323 Memory Page Size Maximum: 65536 bytes 00:07:38.323 Persistent Memory Region: Not Supported 00:07:38.323 Optional Asynchronous Events Supported 00:07:38.323 Namespace Attribute Notices: Supported 00:07:38.323 Firmware Activation Notices: Not Supported 00:07:38.323 ANA Change Notices: Not Supported 00:07:38.323 PLE Aggregate Log Change Notices: Not Supported 00:07:38.323 LBA Status Info Alert Notices: Not Supported 00:07:38.323 EGE Aggregate Log Change Notices: Not Supported 00:07:38.323 Normal NVM Subsystem Shutdown event: Not Supported 00:07:38.323 Zone Descriptor Change Notices: Not Supported 00:07:38.323 Discovery Log Change Notices: Not Supported 00:07:38.323 Controller Attributes 00:07:38.323 128-bit Host Identifier: Not Supported 00:07:38.323 Non-Operational Permissive Mode: Not Supported 00:07:38.323 NVM Sets: Not Supported 00:07:38.323 Read Recovery Levels: Not Supported 00:07:38.323 Endurance Groups: Not Supported 00:07:38.323 Predictable Latency Mode: Not Supported 00:07:38.323 Traffic Based Keep ALive: Not Supported 00:07:38.323 Namespace Granularity: Not Supported 00:07:38.323 SQ Associations: Not Supported 00:07:38.323 UUID List: Not Supported 00:07:38.323 Multi-Domain Subsystem: Not Supported 00:07:38.323 Fixed Capacity Management: Not Supported 00:07:38.323 Variable Capacity Management: Not Supported 00:07:38.323 Delete Endurance Group: Not Supported 00:07:38.323 Delete NVM Set: Not Supported 00:07:38.323 Extended LBA Formats Supported: Supported 00:07:38.323 Flexible Data Placement Supported: Not Supported 00:07:38.323 00:07:38.323 Controller Memory Buffer Support 00:07:38.323 ================================ 00:07:38.323 Supported: No 00:07:38.323 00:07:38.323 Persistent Memory Region Support 00:07:38.323 ================================ 00:07:38.323 Supported: No 00:07:38.323 00:07:38.323 Admin Command Set Attributes 00:07:38.323 ============================ 00:07:38.323 Security Send/Receive: 
Not Supported 00:07:38.323 Format NVM: Supported 00:07:38.323 Firmware Activate/Download: Not Supported 00:07:38.323 Namespace Management: Supported 00:07:38.323 Device Self-Test: Not Supported 00:07:38.323 Directives: Supported 00:07:38.323 NVMe-MI: Not Supported 00:07:38.323 Virtualization Management: Not Supported 00:07:38.323 Doorbell Buffer Config: Supported 00:07:38.323 Get LBA Status Capability: Not Supported 00:07:38.323 Command & Feature Lockdown Capability: Not Supported 00:07:38.323 Abort Command Limit: 4 00:07:38.323 Async Event Request Limit: 4 00:07:38.323 Number of Firmware Slots: N/A 00:07:38.323 Firmware Slot 1 Read-Only: N/A 00:07:38.323 Firmware Activation Without Reset: N/A 00:07:38.323 Multiple Update Detection Support: N/A 00:07:38.323 Firmware Update Granularity: No Information Provided 00:07:38.323 Per-Namespace SMART Log: Yes 00:07:38.323 Asymmetric Namespace Access Log Page: Not Supported 00:07:38.323 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:38.323 Command Effects Log Page: Supported 00:07:38.323 Get Log Page Extended Data: Supported 00:07:38.323 Telemetry Log Pages: Not Supported 00:07:38.323 Persistent Event Log Pages: Not Supported 00:07:38.323 Supported Log Pages Log Page: May Support 00:07:38.323 Commands Supported & Effects Log Page: Not Supported 00:07:38.323 Feature Identifiers & Effects Log Page:May Support 00:07:38.323 NVMe-MI Commands & Effects Log Page: May Support 00:07:38.323 Data Area 4 for Telemetry Log: Not Supported 00:07:38.323 Error Log Page Entries Supported: 1 00:07:38.323 Keep Alive: Not Supported 00:07:38.323 00:07:38.323 NVM Command Set Attributes 00:07:38.323 ========================== 00:07:38.323 Submission Queue Entry Size 00:07:38.323 Max: 64 00:07:38.323 Min: 64 00:07:38.323 Completion Queue Entry Size 00:07:38.324 Max: 16 00:07:38.324 Min: 16 00:07:38.324 Number of Namespaces: 256 00:07:38.324 Compare Command: Supported 00:07:38.324 Write Uncorrectable Command: Not Supported 00:07:38.324 Dataset Management Command: Supported 00:07:38.324 Write Zeroes Command: Supported 00:07:38.324 Set Features Save Field: Supported 00:07:38.324 Reservations: Not Supported 00:07:38.324 Timestamp: Supported 00:07:38.324 Copy: Supported 00:07:38.324 Volatile Write Cache: Present 00:07:38.324 Atomic Write Unit (Normal): 1 00:07:38.324 Atomic Write Unit (PFail): 1 00:07:38.324 Atomic Compare & Write Unit: 1 00:07:38.324 Fused Compare & Write: Not Supported 00:07:38.324 Scatter-Gather List 00:07:38.324 SGL Command Set: Supported 00:07:38.324 SGL Keyed: Not Supported 00:07:38.324 SGL Bit Bucket Descriptor: Not Supported 00:07:38.324 SGL Metadata Pointer: Not Supported 00:07:38.324 Oversized SGL: Not Supported 00:07:38.324 SGL Metadata Address: Not Supported 00:07:38.324 SGL Offset: Not Supported 00:07:38.324 Transport SGL Data Block: Not Supported 00:07:38.324 Replay Protected Memory Block: Not Supported 00:07:38.324 00:07:38.324 Firmware Slot Information 00:07:38.324 ========================= 00:07:38.324 Active slot: 1 00:07:38.324 Slot 1 Firmware Revision: 1.0 00:07:38.324 00:07:38.324 00:07:38.324 Commands Supported and Effects 00:07:38.324 ============================== 00:07:38.324 Admin Commands 00:07:38.324 -------------- 00:07:38.324 Delete I/O Submission Queue (00h): Supported 00:07:38.324 Create I/O Submission Queue (01h): Supported 00:07:38.324 Get Log Page (02h): Supported 00:07:38.324 Delete I/O Completion Queue (04h): Supported 00:07:38.324 Create I/O Completion Queue (05h): Supported 00:07:38.324 Identify (06h): Supported 
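The dump unfolding here comes from the identify pass traced a few records up: get_nvme_bdfs() pulls every local PCIe address out of gen_nvme.sh with jq, and nvme_identify() feeds them to the spdk_nvme_identify binary. A minimal standalone sketch of the same flow, assuming the spdk_repo layout from this run and identify's -r transport-ID option for targeting one controller at a time:

#!/usr/bin/env bash
# Collect NVMe BDFs the same way get_nvme_bdfs does in the trace above.
rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
# Dump identify data one controller at a time; -i 0 matches the shared
# memory group id the harness passes.
for bdf in "${bdfs[@]}"; do
    "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
done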
00:07:38.324 Abort (08h): Supported 00:07:38.324 Set Features (09h): Supported 00:07:38.324 Get Features (0Ah): Supported 00:07:38.324 Asynchronous Event Request (0Ch): Supported 00:07:38.324 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:38.324 Directive Send (19h): Supported 00:07:38.324 Directive Receive (1Ah): Supported 00:07:38.324 Virtualization Management (1Ch): Supported 00:07:38.324 Doorbell Buffer Config (7Ch): Supported 00:07:38.324 Format NVM (80h): Supported LBA-Change 00:07:38.324 I/O Commands 00:07:38.324 ------------ 00:07:38.324 Flush (00h): Supported LBA-Change 00:07:38.324 Write (01h): Supported LBA-Change 00:07:38.324 Read (02h): Supported 00:07:38.324 Compare (05h): Supported 00:07:38.324 Write Zeroes (08h): Supported LBA-Change 00:07:38.324 Dataset Management (09h): Supported LBA-Change 00:07:38.324 Unknown (0Ch): Supported 00:07:38.324 Unknown (12h): Supported 00:07:38.324 Copy (19h): Supported LBA-Change 00:07:38.324 Unknown (1Dh): Supported LBA-Change 00:07:38.324 00:07:38.324 Error Log 00:07:38.324 ========= 00:07:38.324 00:07:38.324 Arbitration 00:07:38.324 =========== 00:07:38.324 Arbitration Burst: no limit 00:07:38.324 00:07:38.324 Power Management 00:07:38.324 ================ 00:07:38.324 Number of Power States: 1 00:07:38.324 Current Power State: Power State #0 00:07:38.324 Power State #0: 00:07:38.324 Max Power: 25.00 W 00:07:38.324 Non-Operational State: Operational 00:07:38.324 Entry Latency: 16 microseconds 00:07:38.324 Exit Latency: 4 microseconds 00:07:38.324 Relative Read Throughput: 0 00:07:38.324 Relative Read Latency: 0 00:07:38.324 Relative Write Throughput: 0 00:07:38.324 Relative Write Latency: 0 00:07:38.324 [2024-10-25 17:48:56.626659] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62766 terminated unexpected 00:07:38.324 Idle Power: Not Reported 00:07:38.324 Active Power: Not Reported 00:07:38.324 Non-Operational Permissive Mode: Not Supported 00:07:38.324 00:07:38.324 Health Information 00:07:38.324 ================== 00:07:38.324 Critical Warnings: 00:07:38.324 Available Spare Space: OK 00:07:38.324 Temperature: OK 00:07:38.324 Device Reliability: OK 00:07:38.324 Read Only: No 00:07:38.324 Volatile Memory Backup: OK 00:07:38.324 Current Temperature: 323 Kelvin (50 Celsius) 00:07:38.324 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:38.324 Available Spare: 0% 00:07:38.324 Available Spare Threshold: 0% 00:07:38.324 Life Percentage Used: 0% 00:07:38.324 Data Units Read: 652 00:07:38.324 Data Units Written: 581 00:07:38.324 Host Read Commands: 36856 00:07:38.324 Host Write Commands: 36642 00:07:38.324 Controller Busy Time: 0 minutes 00:07:38.324 Power Cycles: 0 00:07:38.324 Power On Hours: 0 hours 00:07:38.324 Unsafe Shutdowns: 0 00:07:38.324 Unrecoverable Media Errors: 0 00:07:38.324 Lifetime Error Log Entries: 0 00:07:38.324 Warning Temperature Time: 0 minutes 00:07:38.324 Critical Temperature Time: 0 minutes 00:07:38.324 00:07:38.324 Number of Queues 00:07:38.324 ================ 00:07:38.324 Number of I/O Submission Queues: 64 00:07:38.324 Number of I/O Completion Queues: 64 00:07:38.324 00:07:38.324 ZNS Specific Controller Data 00:07:38.324 ============================ 00:07:38.324 Zone Append Size Limit: 0 00:07:38.324 00:07:38.324 00:07:38.324 Active Namespaces 00:07:38.324 ================= 00:07:38.324 Namespace ID:1 00:07:38.324 Error Recovery Timeout: Unlimited 00:07:38.324 Command Set Identifier: NVM (00h) 00:07:38.324 Deallocate: Supported 00:07:38.324 
Deallocated/Unwritten Error: Supported 00:07:38.324 Deallocated Read Value: All 0x00 00:07:38.324 Deallocate in Write Zeroes: Not Supported 00:07:38.324 Deallocated Guard Field: 0xFFFF 00:07:38.324 Flush: Supported 00:07:38.324 Reservation: Not Supported 00:07:38.324 Metadata Transferred as: Separate Metadata Buffer 00:07:38.324 Namespace Sharing Capabilities: Private 00:07:38.324 Size (in LBAs): 1548666 (5GiB) 00:07:38.324 Capacity (in LBAs): 1548666 (5GiB) 00:07:38.324 Utilization (in LBAs): 1548666 (5GiB) 00:07:38.324 Thin Provisioning: Not Supported 00:07:38.324 Per-NS Atomic Units: No 00:07:38.324 Maximum Single Source Range Length: 128 00:07:38.324 Maximum Copy Length: 128 00:07:38.324 Maximum Source Range Count: 128 00:07:38.324 NGUID/EUI64 Never Reused: No 00:07:38.324 Namespace Write Protected: No 00:07:38.324 Number of LBA Formats: 8 00:07:38.324 Current LBA Format: LBA Format #07 00:07:38.324 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:38.324 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:38.324 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:38.324 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:38.324 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:38.324 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:38.324 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:38.324 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:38.324 00:07:38.324 NVM Specific Namespace Data 00:07:38.324 =========================== 00:07:38.324 Logical Block Storage Tag Mask: 0 00:07:38.324 Protection Information Capabilities: 00:07:38.324 16b Guard Protection Information Storage Tag Support: No 00:07:38.324 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:38.324 Storage Tag Check Read Support: No 00:07:38.324 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.324 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.324 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.324 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.324 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.324 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.324 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.324 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.324 ===================================================== 00:07:38.324 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:38.324 ===================================================== 00:07:38.324 Controller Capabilities/Features 00:07:38.324 ================================ 00:07:38.324 Vendor ID: 1b36 00:07:38.324 Subsystem Vendor ID: 1af4 00:07:38.324 Serial Number: 12341 00:07:38.324 Model Number: QEMU NVMe Ctrl 00:07:38.324 Firmware Version: 8.0.0 00:07:38.324 Recommended Arb Burst: 6 00:07:38.324 IEEE OUI Identifier: 00 54 52 00:07:38.324 Multi-path I/O 00:07:38.324 May have multiple subsystem ports: No 00:07:38.324 May have multiple controllers: No 00:07:38.324 Associated with SR-IOV VF: No 00:07:38.324 Max Data Transfer Size: 524288 00:07:38.324 Max Number of Namespaces: 256 00:07:38.324 Max Number of I/O Queues: 64 00:07:38.324 NVMe Specification Version (VS): 1.4 00:07:38.324 NVMe 
Specification Version (Identify): 1.4 00:07:38.324 Maximum Queue Entries: 2048 00:07:38.324 Contiguous Queues Required: Yes 00:07:38.324 Arbitration Mechanisms Supported 00:07:38.324 Weighted Round Robin: Not Supported 00:07:38.324 Vendor Specific: Not Supported 00:07:38.324 Reset Timeout: 7500 ms 00:07:38.324 Doorbell Stride: 4 bytes 00:07:38.324 NVM Subsystem Reset: Not Supported 00:07:38.324 Command Sets Supported 00:07:38.324 NVM Command Set: Supported 00:07:38.324 Boot Partition: Not Supported 00:07:38.324 Memory Page Size Minimum: 4096 bytes 00:07:38.324 Memory Page Size Maximum: 65536 bytes 00:07:38.324 Persistent Memory Region: Not Supported 00:07:38.325 Optional Asynchronous Events Supported 00:07:38.325 Namespace Attribute Notices: Supported 00:07:38.325 Firmware Activation Notices: Not Supported 00:07:38.325 ANA Change Notices: Not Supported 00:07:38.325 PLE Aggregate Log Change Notices: Not Supported 00:07:38.325 LBA Status Info Alert Notices: Not Supported 00:07:38.325 EGE Aggregate Log Change Notices: Not Supported 00:07:38.325 Normal NVM Subsystem Shutdown event: Not Supported 00:07:38.325 Zone Descriptor Change Notices: Not Supported 00:07:38.325 Discovery Log Change Notices: Not Supported 00:07:38.325 Controller Attributes 00:07:38.325 128-bit Host Identifier: Not Supported 00:07:38.325 Non-Operational Permissive Mode: Not Supported 00:07:38.325 NVM Sets: Not Supported 00:07:38.325 Read Recovery Levels: Not Supported 00:07:38.325 Endurance Groups: Not Supported 00:07:38.325 Predictable Latency Mode: Not Supported 00:07:38.325 Traffic Based Keep ALive: Not Supported 00:07:38.325 Namespace Granularity: Not Supported 00:07:38.325 SQ Associations: Not Supported 00:07:38.325 UUID List: Not Supported 00:07:38.325 Multi-Domain Subsystem: Not Supported 00:07:38.325 Fixed Capacity Management: Not Supported 00:07:38.325 Variable Capacity Management: Not Supported 00:07:38.325 Delete Endurance Group: Not Supported 00:07:38.325 Delete NVM Set: Not Supported 00:07:38.325 Extended LBA Formats Supported: Supported 00:07:38.325 Flexible Data Placement Supported: Not Supported 00:07:38.325 00:07:38.325 Controller Memory Buffer Support 00:07:38.325 ================================ 00:07:38.325 Supported: No 00:07:38.325 00:07:38.325 Persistent Memory Region Support 00:07:38.325 ================================ 00:07:38.325 Supported: No 00:07:38.325 00:07:38.325 Admin Command Set Attributes 00:07:38.325 ============================ 00:07:38.325 Security Send/Receive: Not Supported 00:07:38.325 Format NVM: Supported 00:07:38.325 Firmware Activate/Download: Not Supported 00:07:38.325 Namespace Management: Supported 00:07:38.325 Device Self-Test: Not Supported 00:07:38.325 Directives: Supported 00:07:38.325 NVMe-MI: Not Supported 00:07:38.325 Virtualization Management: Not Supported 00:07:38.325 Doorbell Buffer Config: Supported 00:07:38.325 Get LBA Status Capability: Not Supported 00:07:38.325 Command & Feature Lockdown Capability: Not Supported 00:07:38.325 Abort Command Limit: 4 00:07:38.325 Async Event Request Limit: 4 00:07:38.325 Number of Firmware Slots: N/A 00:07:38.325 Firmware Slot 1 Read-Only: N/A 00:07:38.325 Firmware Activation Without Reset: N/A 00:07:38.325 Multiple Update Detection Support: N/A 00:07:38.325 Firmware Update Granularity: No Information Provided 00:07:38.325 Per-Namespace SMART Log: Yes 00:07:38.325 Asymmetric Namespace Access Log Page: Not Supported 00:07:38.325 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:38.325 Command Effects Log Page: Supported 
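Each Health Information block in these dumps reports the composite temperature in Kelvin, the unit the NVMe spec defines for it; the parenthesized Celsius figure is simply the Kelvin value minus 273:

# Reproducing the conversion printed in the Health Information records
echo "current:   $((323 - 273)) C"   # 323 Kelvin -> 50 Celsius
echo "threshold: $((343 - 273)) C"   # 343 Kelvin -> 70 Celsius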
00:07:38.325 Get Log Page Extended Data: Supported 00:07:38.325 Telemetry Log Pages: Not Supported 00:07:38.325 Persistent Event Log Pages: Not Supported 00:07:38.325 Supported Log Pages Log Page: May Support 00:07:38.325 Commands Supported & Effects Log Page: Not Supported 00:07:38.325 Feature Identifiers & Effects Log Page:May Support 00:07:38.325 NVMe-MI Commands & Effects Log Page: May Support 00:07:38.325 Data Area 4 for Telemetry Log: Not Supported 00:07:38.325 Error Log Page Entries Supported: 1 00:07:38.325 Keep Alive: Not Supported 00:07:38.325 00:07:38.325 NVM Command Set Attributes 00:07:38.325 ========================== 00:07:38.325 Submission Queue Entry Size 00:07:38.325 Max: 64 00:07:38.325 Min: 64 00:07:38.325 Completion Queue Entry Size 00:07:38.325 Max: 16 00:07:38.325 Min: 16 00:07:38.325 Number of Namespaces: 256 00:07:38.325 Compare Command: Supported 00:07:38.325 Write Uncorrectable Command: Not Supported 00:07:38.325 Dataset Management Command: Supported 00:07:38.325 Write Zeroes Command: Supported 00:07:38.325 Set Features Save Field: Supported 00:07:38.325 Reservations: Not Supported 00:07:38.325 Timestamp: Supported 00:07:38.325 Copy: Supported 00:07:38.325 Volatile Write Cache: Present 00:07:38.325 Atomic Write Unit (Normal): 1 00:07:38.325 Atomic Write Unit (PFail): 1 00:07:38.325 Atomic Compare & Write Unit: 1 00:07:38.325 Fused Compare & Write: Not Supported 00:07:38.325 Scatter-Gather List 00:07:38.325 SGL Command Set: Supported 00:07:38.325 SGL Keyed: Not Supported 00:07:38.325 SGL Bit Bucket Descriptor: Not Supported 00:07:38.325 SGL Metadata Pointer: Not Supported 00:07:38.325 Oversized SGL: Not Supported 00:07:38.325 SGL Metadata Address: Not Supported 00:07:38.325 SGL Offset: Not Supported 00:07:38.325 Transport SGL Data Block: Not Supported 00:07:38.325 Replay Protected Memory Block: Not Supported 00:07:38.325 00:07:38.325 Firmware Slot Information 00:07:38.325 ========================= 00:07:38.325 Active slot: 1 00:07:38.325 Slot 1 Firmware Revision: 1.0 00:07:38.325 00:07:38.325 00:07:38.325 Commands Supported and Effects 00:07:38.325 ============================== 00:07:38.325 Admin Commands 00:07:38.325 -------------- 00:07:38.325 Delete I/O Submission Queue (00h): Supported 00:07:38.325 Create I/O Submission Queue (01h): Supported 00:07:38.325 Get Log Page (02h): Supported 00:07:38.325 Delete I/O Completion Queue (04h): Supported 00:07:38.325 Create I/O Completion Queue (05h): Supported 00:07:38.325 Identify (06h): Supported 00:07:38.325 Abort (08h): Supported 00:07:38.325 Set Features (09h): Supported 00:07:38.325 Get Features (0Ah): Supported 00:07:38.325 Asynchronous Event Request (0Ch): Supported 00:07:38.325 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:38.325 Directive Send (19h): Supported 00:07:38.325 Directive Receive (1Ah): Supported 00:07:38.325 Virtualization Management (1Ch): Supported 00:07:38.325 Doorbell Buffer Config (7Ch): Supported 00:07:38.325 Format NVM (80h): Supported LBA-Change 00:07:38.325 I/O Commands 00:07:38.325 ------------ 00:07:38.325 Flush (00h): Supported LBA-Change 00:07:38.325 Write (01h): Supported LBA-Change 00:07:38.325 Read (02h): Supported 00:07:38.325 Compare (05h): Supported 00:07:38.325 Write Zeroes (08h): Supported LBA-Change 00:07:38.325 Dataset Management (09h): Supported LBA-Change 00:07:38.325 Unknown (0Ch): Supported 00:07:38.325 Unknown (12h): Supported 00:07:38.325 Copy (19h): Supported LBA-Change 00:07:38.325 Unknown (1Dh): Supported LBA-Change 00:07:38.325 00:07:38.325 Error 
Log 00:07:38.325 ========= 00:07:38.325 00:07:38.325 Arbitration 00:07:38.325 =========== 00:07:38.325 Arbitration Burst: no limit 00:07:38.325 00:07:38.325 Power Management 00:07:38.325 ================ 00:07:38.325 Number of Power States: 1 00:07:38.325 Current Power State: Power State #0 00:07:38.325 Power State #0: 00:07:38.325 Max Power: 25.00 W 00:07:38.325 Non-Operational State: Operational 00:07:38.325 Entry Latency: 16 microseconds 00:07:38.325 Exit Latency: 4 microseconds 00:07:38.325 Relative Read Throughput: 0 00:07:38.325 Relative Read Latency: 0 00:07:38.325 Relative Write Throughput: 0 00:07:38.325 Relative Write Latency: 0 00:07:38.325 Idle Power: Not Reported 00:07:38.325 Active Power: Not Reported 00:07:38.325 Non-Operational Permissive Mode: Not Supported 00:07:38.325 00:07:38.325 Health Information 00:07:38.325 ================== 00:07:38.325 Critical Warnings: 00:07:38.325 Available Spare Space: OK 00:07:38.325 [2024-10-25 17:48:56.627455] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62766 terminated unexpected 00:07:38.325 Temperature: OK 00:07:38.325 Device Reliability: OK 00:07:38.325 Read Only: No 00:07:38.325 Volatile Memory Backup: OK 00:07:38.325 Current Temperature: 323 Kelvin (50 Celsius) 00:07:38.325 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:38.325 Available Spare: 0% 00:07:38.325 Available Spare Threshold: 0% 00:07:38.325 Life Percentage Used: 0% 00:07:38.325 Data Units Read: 1038 00:07:38.325 Data Units Written: 911 00:07:38.325 Host Read Commands: 56769 00:07:38.325 Host Write Commands: 55666 00:07:38.325 Controller Busy Time: 0 minutes 00:07:38.325 Power Cycles: 0 00:07:38.325 Power On Hours: 0 hours 00:07:38.325 Unsafe Shutdowns: 0 00:07:38.325 Unrecoverable Media Errors: 0 00:07:38.325 Lifetime Error Log Entries: 0 00:07:38.325 Warning Temperature Time: 0 minutes 00:07:38.325 Critical Temperature Time: 0 minutes 00:07:38.325 00:07:38.325 Number of Queues 00:07:38.325 ================ 00:07:38.325 Number of I/O Submission Queues: 64 00:07:38.325 Number of I/O Completion Queues: 64 00:07:38.325 00:07:38.325 ZNS Specific Controller Data 00:07:38.325 ============================ 00:07:38.325 Zone Append Size Limit: 0 00:07:38.325 00:07:38.325 00:07:38.325 Active Namespaces 00:07:38.325 ================= 00:07:38.325 Namespace ID:1 00:07:38.325 Error Recovery Timeout: Unlimited 00:07:38.325 Command Set Identifier: NVM (00h) 00:07:38.325 Deallocate: Supported 00:07:38.325 Deallocated/Unwritten Error: Supported 00:07:38.325 Deallocated Read Value: All 0x00 00:07:38.325 Deallocate in Write Zeroes: Not Supported 00:07:38.326 Deallocated Guard Field: 0xFFFF 00:07:38.326 Flush: Supported 00:07:38.326 Reservation: Not Supported 00:07:38.326 Namespace Sharing Capabilities: Private 00:07:38.326 Size (in LBAs): 1310720 (5GiB) 00:07:38.326 Capacity (in LBAs): 1310720 (5GiB) 00:07:38.326 Utilization (in LBAs): 1310720 (5GiB) 00:07:38.326 Thin Provisioning: Not Supported 00:07:38.326 Per-NS Atomic Units: No 00:07:38.326 Maximum Single Source Range Length: 128 00:07:38.326 Maximum Copy Length: 128 00:07:38.326 Maximum Source Range Count: 128 00:07:38.326 NGUID/EUI64 Never Reused: No 00:07:38.326 Namespace Write Protected: No 00:07:38.326 Number of LBA Formats: 8 00:07:38.326 Current LBA Format: LBA Format #04 00:07:38.326 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:38.326 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:38.326 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:38.326 LBA Format #03: 
Data Size: 512 Metadata Size: 64 00:07:38.326 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:38.326 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:38.326 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:38.326 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:38.326 00:07:38.326 NVM Specific Namespace Data 00:07:38.326 =========================== 00:07:38.326 Logical Block Storage Tag Mask: 0 00:07:38.326 Protection Information Capabilities: 00:07:38.326 16b Guard Protection Information Storage Tag Support: No 00:07:38.326 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:38.326 Storage Tag Check Read Support: No 00:07:38.326 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.326 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.326 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.326 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.326 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.326 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.326 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.326 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.326 ===================================================== 00:07:38.326 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:38.326 ===================================================== 00:07:38.326 Controller Capabilities/Features 00:07:38.326 ================================ 00:07:38.326 Vendor ID: 1b36 00:07:38.326 Subsystem Vendor ID: 1af4 00:07:38.326 Serial Number: 12343 00:07:38.326 Model Number: QEMU NVMe Ctrl 00:07:38.326 Firmware Version: 8.0.0 00:07:38.326 Recommended Arb Burst: 6 00:07:38.326 IEEE OUI Identifier: 00 54 52 00:07:38.326 Multi-path I/O 00:07:38.326 May have multiple subsystem ports: No 00:07:38.326 May have multiple controllers: Yes 00:07:38.326 Associated with SR-IOV VF: No 00:07:38.326 Max Data Transfer Size: 524288 00:07:38.326 Max Number of Namespaces: 256 00:07:38.326 Max Number of I/O Queues: 64 00:07:38.326 NVMe Specification Version (VS): 1.4 00:07:38.326 NVMe Specification Version (Identify): 1.4 00:07:38.326 Maximum Queue Entries: 2048 00:07:38.326 Contiguous Queues Required: Yes 00:07:38.326 Arbitration Mechanisms Supported 00:07:38.326 Weighted Round Robin: Not Supported 00:07:38.326 Vendor Specific: Not Supported 00:07:38.326 Reset Timeout: 7500 ms 00:07:38.326 Doorbell Stride: 4 bytes 00:07:38.326 NVM Subsystem Reset: Not Supported 00:07:38.326 Command Sets Supported 00:07:38.326 NVM Command Set: Supported 00:07:38.326 Boot Partition: Not Supported 00:07:38.326 Memory Page Size Minimum: 4096 bytes 00:07:38.326 Memory Page Size Maximum: 65536 bytes 00:07:38.326 Persistent Memory Region: Not Supported 00:07:38.326 Optional Asynchronous Events Supported 00:07:38.326 Namespace Attribute Notices: Supported 00:07:38.326 Firmware Activation Notices: Not Supported 00:07:38.326 ANA Change Notices: Not Supported 00:07:38.326 PLE Aggregate Log Change Notices: Not Supported 00:07:38.326 LBA Status Info Alert Notices: Not Supported 00:07:38.326 EGE Aggregate Log Change Notices: Not Supported 00:07:38.326 Normal NVM Subsystem Shutdown event: Not Supported 00:07:38.326 Zone 
Descriptor Change Notices: Not Supported 00:07:38.326 Discovery Log Change Notices: Not Supported 00:07:38.326 Controller Attributes 00:07:38.326 128-bit Host Identifier: Not Supported 00:07:38.326 Non-Operational Permissive Mode: Not Supported 00:07:38.326 NVM Sets: Not Supported 00:07:38.326 Read Recovery Levels: Not Supported 00:07:38.326 Endurance Groups: Supported 00:07:38.326 Predictable Latency Mode: Not Supported 00:07:38.326 Traffic Based Keep ALive: Not Supported 00:07:38.326 Namespace Granularity: Not Supported 00:07:38.326 SQ Associations: Not Supported 00:07:38.326 UUID List: Not Supported 00:07:38.326 Multi-Domain Subsystem: Not Supported 00:07:38.326 Fixed Capacity Management: Not Supported 00:07:38.326 Variable Capacity Management: Not Supported 00:07:38.326 Delete Endurance Group: Not Supported 00:07:38.326 Delete NVM Set: Not Supported 00:07:38.326 Extended LBA Formats Supported: Supported 00:07:38.326 Flexible Data Placement Supported: Supported 00:07:38.326 00:07:38.326 Controller Memory Buffer Support 00:07:38.326 ================================ 00:07:38.326 Supported: No 00:07:38.326 00:07:38.326 Persistent Memory Region Support 00:07:38.326 ================================ 00:07:38.326 Supported: No 00:07:38.326 00:07:38.326 Admin Command Set Attributes 00:07:38.326 ============================ 00:07:38.326 Security Send/Receive: Not Supported 00:07:38.326 Format NVM: Supported 00:07:38.326 Firmware Activate/Download: Not Supported 00:07:38.326 Namespace Management: Supported 00:07:38.326 Device Self-Test: Not Supported 00:07:38.326 Directives: Supported 00:07:38.326 NVMe-MI: Not Supported 00:07:38.326 Virtualization Management: Not Supported 00:07:38.326 Doorbell Buffer Config: Supported 00:07:38.326 Get LBA Status Capability: Not Supported 00:07:38.326 Command & Feature Lockdown Capability: Not Supported 00:07:38.326 Abort Command Limit: 4 00:07:38.326 Async Event Request Limit: 4 00:07:38.326 Number of Firmware Slots: N/A 00:07:38.326 Firmware Slot 1 Read-Only: N/A 00:07:38.326 Firmware Activation Without Reset: N/A 00:07:38.326 Multiple Update Detection Support: N/A 00:07:38.326 Firmware Update Granularity: No Information Provided 00:07:38.326 Per-Namespace SMART Log: Yes 00:07:38.326 Asymmetric Namespace Access Log Page: Not Supported 00:07:38.326 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:38.326 Command Effects Log Page: Supported 00:07:38.326 Get Log Page Extended Data: Supported 00:07:38.326 Telemetry Log Pages: Not Supported 00:07:38.326 Persistent Event Log Pages: Not Supported 00:07:38.326 Supported Log Pages Log Page: May Support 00:07:38.326 Commands Supported & Effects Log Page: Not Supported 00:07:38.326 Feature Identifiers & Effects Log Page:May Support 00:07:38.326 NVMe-MI Commands & Effects Log Page: May Support 00:07:38.326 Data Area 4 for Telemetry Log: Not Supported 00:07:38.326 Error Log Page Entries Supported: 1 00:07:38.326 Keep Alive: Not Supported 00:07:38.326 00:07:38.326 NVM Command Set Attributes 00:07:38.326 ========================== 00:07:38.326 Submission Queue Entry Size 00:07:38.326 Max: 64 00:07:38.326 Min: 64 00:07:38.326 Completion Queue Entry Size 00:07:38.326 Max: 16 00:07:38.326 Min: 16 00:07:38.326 Number of Namespaces: 256 00:07:38.326 Compare Command: Supported 00:07:38.326 Write Uncorrectable Command: Not Supported 00:07:38.326 Dataset Management Command: Supported 00:07:38.326 Write Zeroes Command: Supported 00:07:38.326 Set Features Save Field: Supported 00:07:38.326 Reservations: Not Supported 00:07:38.326 
Timestamp: Supported 00:07:38.326 Copy: Supported 00:07:38.326 Volatile Write Cache: Present 00:07:38.326 Atomic Write Unit (Normal): 1 00:07:38.326 Atomic Write Unit (PFail): 1 00:07:38.326 Atomic Compare & Write Unit: 1 00:07:38.326 Fused Compare & Write: Not Supported 00:07:38.326 Scatter-Gather List 00:07:38.326 SGL Command Set: Supported 00:07:38.326 SGL Keyed: Not Supported 00:07:38.326 SGL Bit Bucket Descriptor: Not Supported 00:07:38.326 SGL Metadata Pointer: Not Supported 00:07:38.326 Oversized SGL: Not Supported 00:07:38.326 SGL Metadata Address: Not Supported 00:07:38.326 SGL Offset: Not Supported 00:07:38.326 Transport SGL Data Block: Not Supported 00:07:38.326 Replay Protected Memory Block: Not Supported 00:07:38.326 00:07:38.326 Firmware Slot Information 00:07:38.326 ========================= 00:07:38.326 Active slot: 1 00:07:38.326 Slot 1 Firmware Revision: 1.0 00:07:38.326 00:07:38.326 00:07:38.326 Commands Supported and Effects 00:07:38.326 ============================== 00:07:38.326 Admin Commands 00:07:38.326 -------------- 00:07:38.326 Delete I/O Submission Queue (00h): Supported 00:07:38.326 Create I/O Submission Queue (01h): Supported 00:07:38.326 Get Log Page (02h): Supported 00:07:38.326 Delete I/O Completion Queue (04h): Supported 00:07:38.326 Create I/O Completion Queue (05h): Supported 00:07:38.326 Identify (06h): Supported 00:07:38.326 Abort (08h): Supported 00:07:38.326 Set Features (09h): Supported 00:07:38.326 Get Features (0Ah): Supported 00:07:38.326 Asynchronous Event Request (0Ch): Supported 00:07:38.326 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:38.326 Directive Send (19h): Supported 00:07:38.327 Directive Receive (1Ah): Supported 00:07:38.327 Virtualization Management (1Ch): Supported 00:07:38.327 Doorbell Buffer Config (7Ch): Supported 00:07:38.327 Format NVM (80h): Supported LBA-Change 00:07:38.327 I/O Commands 00:07:38.327 ------------ 00:07:38.327 Flush (00h): Supported LBA-Change 00:07:38.327 Write (01h): Supported LBA-Change 00:07:38.327 Read (02h): Supported 00:07:38.327 Compare (05h): Supported 00:07:38.327 Write Zeroes (08h): Supported LBA-Change 00:07:38.327 Dataset Management (09h): Supported LBA-Change 00:07:38.327 Unknown (0Ch): Supported 00:07:38.327 Unknown (12h): Supported 00:07:38.327 Copy (19h): Supported LBA-Change 00:07:38.327 Unknown (1Dh): Supported LBA-Change 00:07:38.327 00:07:38.327 Error Log 00:07:38.327 ========= 00:07:38.327 00:07:38.327 Arbitration 00:07:38.327 =========== 00:07:38.327 Arbitration Burst: no limit 00:07:38.327 00:07:38.327 Power Management 00:07:38.327 ================ 00:07:38.327 Number of Power States: 1 00:07:38.327 Current Power State: Power State #0 00:07:38.327 Power State #0: 00:07:38.327 Max Power: 25.00 W 00:07:38.327 Non-Operational State: Operational 00:07:38.327 Entry Latency: 16 microseconds 00:07:38.327 Exit Latency: 4 microseconds 00:07:38.327 Relative Read Throughput: 0 00:07:38.327 Relative Read Latency: 0 00:07:38.327 Relative Write Throughput: 0 00:07:38.327 Relative Write Latency: 0 00:07:38.327 Idle Power: Not Reported 00:07:38.327 Active Power: Not Reported 00:07:38.327 Non-Operational Permissive Mode: Not Supported 00:07:38.327 00:07:38.327 Health Information 00:07:38.327 ================== 00:07:38.327 Critical Warnings: 00:07:38.327 Available Spare Space: OK 00:07:38.327 Temperature: OK 00:07:38.327 Device Reliability: OK 00:07:38.327 Read Only: No 00:07:38.327 Volatile Memory Backup: OK 00:07:38.327 Current Temperature: 323 Kelvin (50 Celsius) 00:07:38.327 
Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:38.327 Available Spare: 0% 00:07:38.327 Available Spare Threshold: 0% 00:07:38.327 Life Percentage Used: 0% 00:07:38.327 Data Units Read: 908 00:07:38.327 Data Units Written: 837 00:07:38.327 Host Read Commands: 39093 00:07:38.327 Host Write Commands: 38516 00:07:38.327 Controller Busy Time: 0 minutes 00:07:38.327 Power Cycles: 0 00:07:38.327 Power On Hours: 0 hours 00:07:38.327 Unsafe Shutdowns: 0 00:07:38.327 Unrecoverable Media Errors: 0 00:07:38.327 Lifetime Error Log Entries: 0 00:07:38.327 Warning Temperature Time: 0 minutes 00:07:38.327 Critical Temperature Time: 0 minutes 00:07:38.327 00:07:38.327 Number of Queues 00:07:38.327 ================ 00:07:38.327 Number of I/O Submission Queues: 64 00:07:38.327 Number of I/O Completion Queues: 64 00:07:38.327 00:07:38.327 ZNS Specific Controller Data 00:07:38.327 ============================ 00:07:38.327 Zone Append Size Limit: 0 00:07:38.327 00:07:38.327 00:07:38.327 Active Namespaces 00:07:38.327 ================= 00:07:38.327 Namespace ID:1 00:07:38.327 Error Recovery Timeout: Unlimited 00:07:38.327 Command Set Identifier: NVM (00h) 00:07:38.327 Deallocate: Supported 00:07:38.327 Deallocated/Unwritten Error: Supported 00:07:38.327 Deallocated Read Value: All 0x00 00:07:38.327 Deallocate in Write Zeroes: Not Supported 00:07:38.327 Deallocated Guard Field: 0xFFFF 00:07:38.327 Flush: Supported 00:07:38.327 Reservation: Not Supported 00:07:38.327 Namespace Sharing Capabilities: Multiple Controllers 00:07:38.327 Size (in LBAs): 262144 (1GiB) 00:07:38.327 Capacity (in LBAs): 262144 (1GiB) 00:07:38.327 Utilization (in LBAs): 262144 (1GiB) 00:07:38.327 Thin Provisioning: Not Supported 00:07:38.327 Per-NS Atomic Units: No 00:07:38.327 Maximum Single Source Range Length: 128 00:07:38.327 Maximum Copy Length: 128 00:07:38.327 Maximum Source Range Count: 128 00:07:38.327 NGUID/EUI64 Never Reused: No 00:07:38.327 Namespace Write Protected: No 00:07:38.327 Endurance group ID: 1 00:07:38.327 Number of LBA Formats: 8 00:07:38.327 Current LBA Format: LBA Format #04 00:07:38.327 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:38.327 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:38.327 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:38.327 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:38.327 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:38.327 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:38.327 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:38.327 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:38.327 00:07:38.327 Get Feature FDP: 00:07:38.327 ================ 00:07:38.327 Enabled: Yes 00:07:38.327 FDP configuration index: 0 00:07:38.327 00:07:38.327 FDP configurations log page 00:07:38.327 =========================== 00:07:38.327 Number of FDP configurations: 1 00:07:38.327 Version: 0 00:07:38.327 Size: 112 00:07:38.327 FDP Configuration Descriptor: 0 00:07:38.327 Descriptor Size: 96 00:07:38.327 Reclaim Group Identifier format: 2 00:07:38.327 FDP Volatile Write Cache: Not Present 00:07:38.327 FDP Configuration: Valid 00:07:38.327 Vendor Specific Size: 0 00:07:38.327 Number of Reclaim Groups: 2 00:07:38.327 Number of Reclaim Unit Handles: 8 00:07:38.327 Max Placement Identifiers: 128 00:07:38.327 Number of Namespaces Supported: 256 00:07:38.327 Reclaim Unit Nominal Size: 6000000 bytes 00:07:38.327 Estimated Reclaim Unit Time Limit: Not Reported 00:07:38.327 RUH Desc #000: RUH Type: Initially Isolated 00:07:38.327 RUH Desc #001: RUH 
Type: Initially Isolated 00:07:38.327 RUH Desc #002: RUH Type: Initially Isolated 00:07:38.327 RUH Desc #003: RUH Type: Initially Isolated 00:07:38.327 RUH Desc #004: RUH Type: Initially Isolated 00:07:38.327 RUH Desc #005: RUH Type: Initially Isolated 00:07:38.327 RUH Desc #006: RUH Type: Initially Isolated 00:07:38.327 RUH Desc #007: RUH Type: Initially Isolated 00:07:38.327 00:07:38.327 FDP reclaim unit handle usage log page 00:07:38.327 ====================================== 00:07:38.327 Number of Reclaim Unit Handles: 8 00:07:38.327 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:38.327 RUH Usage Desc #001: RUH Attributes: Unused 00:07:38.327 RUH Usage Desc #002: RUH Attributes: Unused 00:07:38.327 RUH Usage Desc #003: RUH Attributes: Unused 00:07:38.327 RUH Usage Desc #004: RUH Attributes: Unused 00:07:38.327 RUH Usage Desc #005: RUH Attributes: Unused 00:07:38.327 RUH Usage Desc #006: RUH Attributes: Unused 00:07:38.327 RUH Usage Desc #007: RUH Attributes: Unused 00:07:38.327 00:07:38.327 FDP statistics log page 00:07:38.327 ======================= 00:07:38.327 Host bytes with metadata written: 524656640 00:07:38.327 [2024-10-25 17:48:56.628845] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62766 terminated unexpected 00:07:38.327 Media bytes with metadata written: 524705792 00:07:38.327 Media bytes erased: 0 00:07:38.327 00:07:38.327 FDP events log page 00:07:38.327 =================== 00:07:38.327 Number of FDP events: 0 00:07:38.327 00:07:38.327 NVM Specific Namespace Data 00:07:38.327 =========================== 00:07:38.327 Logical Block Storage Tag Mask: 0 00:07:38.327 Protection Information Capabilities: 00:07:38.327 16b Guard Protection Information Storage Tag Support: No 00:07:38.327 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:38.327 Storage Tag Check Read Support: No 00:07:38.327 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.327 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.327 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.327 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.327 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.327 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.327 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.327 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.327 ===================================================== 00:07:38.327 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:38.327 ===================================================== 00:07:38.327 Controller Capabilities/Features 00:07:38.327 ================================ 00:07:38.327 Vendor ID: 1b36 00:07:38.327 Subsystem Vendor ID: 1af4 00:07:38.327 Serial Number: 12342 00:07:38.327 Model Number: QEMU NVMe Ctrl 00:07:38.327 Firmware Version: 8.0.0 00:07:38.327 Recommended Arb Burst: 6 00:07:38.327 IEEE OUI Identifier: 00 54 52 00:07:38.327 Multi-path I/O 00:07:38.327 May have multiple subsystem ports: No 00:07:38.327 May have multiple controllers: No 00:07:38.327 Associated with SR-IOV VF: No 00:07:38.327 Max Data Transfer Size: 524288 00:07:38.327 Max Number of Namespaces: 256 
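The FDP statistics for controller 12343 a few records up are raw byte counters; scaling them shows how close the host-side and media-side figures are for this run (integer division, values copied from the log):

echo "$((524656640 / 1048576)) MiB"   # Host bytes with metadata written
echo "$((524705792 / 1048576)) MiB"   # Media bytes with metadata written
# Both round down to 500 MiB; the difference is 49152 bytes (48 KiB).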
00:07:38.327 Max Number of I/O Queues: 64 00:07:38.327 NVMe Specification Version (VS): 1.4 00:07:38.327 NVMe Specification Version (Identify): 1.4 00:07:38.327 Maximum Queue Entries: 2048 00:07:38.327 Contiguous Queues Required: Yes 00:07:38.327 Arbitration Mechanisms Supported 00:07:38.327 Weighted Round Robin: Not Supported 00:07:38.327 Vendor Specific: Not Supported 00:07:38.327 Reset Timeout: 7500 ms 00:07:38.327 Doorbell Stride: 4 bytes 00:07:38.328 NVM Subsystem Reset: Not Supported 00:07:38.328 Command Sets Supported 00:07:38.328 NVM Command Set: Supported 00:07:38.328 Boot Partition: Not Supported 00:07:38.328 Memory Page Size Minimum: 4096 bytes 00:07:38.328 Memory Page Size Maximum: 65536 bytes 00:07:38.328 Persistent Memory Region: Not Supported 00:07:38.328 Optional Asynchronous Events Supported 00:07:38.328 Namespace Attribute Notices: Supported 00:07:38.328 Firmware Activation Notices: Not Supported 00:07:38.328 ANA Change Notices: Not Supported 00:07:38.328 PLE Aggregate Log Change Notices: Not Supported 00:07:38.328 LBA Status Info Alert Notices: Not Supported 00:07:38.328 EGE Aggregate Log Change Notices: Not Supported 00:07:38.328 Normal NVM Subsystem Shutdown event: Not Supported 00:07:38.328 Zone Descriptor Change Notices: Not Supported 00:07:38.328 Discovery Log Change Notices: Not Supported 00:07:38.328 Controller Attributes 00:07:38.328 128-bit Host Identifier: Not Supported 00:07:38.328 Non-Operational Permissive Mode: Not Supported 00:07:38.328 NVM Sets: Not Supported 00:07:38.328 Read Recovery Levels: Not Supported 00:07:38.328 Endurance Groups: Not Supported 00:07:38.328 Predictable Latency Mode: Not Supported 00:07:38.328 Traffic Based Keep ALive: Not Supported 00:07:38.328 Namespace Granularity: Not Supported 00:07:38.328 SQ Associations: Not Supported 00:07:38.328 UUID List: Not Supported 00:07:38.328 Multi-Domain Subsystem: Not Supported 00:07:38.328 Fixed Capacity Management: Not Supported 00:07:38.328 Variable Capacity Management: Not Supported 00:07:38.328 Delete Endurance Group: Not Supported 00:07:38.328 Delete NVM Set: Not Supported 00:07:38.328 Extended LBA Formats Supported: Supported 00:07:38.328 Flexible Data Placement Supported: Not Supported 00:07:38.328 00:07:38.328 Controller Memory Buffer Support 00:07:38.328 ================================ 00:07:38.328 Supported: No 00:07:38.328 00:07:38.328 Persistent Memory Region Support 00:07:38.328 ================================ 00:07:38.328 Supported: No 00:07:38.328 00:07:38.328 Admin Command Set Attributes 00:07:38.328 ============================ 00:07:38.328 Security Send/Receive: Not Supported 00:07:38.328 Format NVM: Supported 00:07:38.328 Firmware Activate/Download: Not Supported 00:07:38.328 Namespace Management: Supported 00:07:38.328 Device Self-Test: Not Supported 00:07:38.328 Directives: Supported 00:07:38.328 NVMe-MI: Not Supported 00:07:38.328 Virtualization Management: Not Supported 00:07:38.328 Doorbell Buffer Config: Supported 00:07:38.328 Get LBA Status Capability: Not Supported 00:07:38.328 Command & Feature Lockdown Capability: Not Supported 00:07:38.328 Abort Command Limit: 4 00:07:38.328 Async Event Request Limit: 4 00:07:38.328 Number of Firmware Slots: N/A 00:07:38.328 Firmware Slot 1 Read-Only: N/A 00:07:38.328 Firmware Activation Without Reset: N/A 00:07:38.328 Multiple Update Detection Support: N/A 00:07:38.328 Firmware Update Granularity: No Information Provided 00:07:38.328 Per-Namespace SMART Log: Yes 00:07:38.328 Asymmetric Namespace Access Log Page: Not Supported 
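Every controller dump in this run repeats the same Health Information layout, so per-drive counters can be pulled out of a captured copy with standard text tools. A sketch assuming the identify output has been saved, one record per line, to a hypothetical identify.log:

# Pair each serial number with the I/O counters that follow it.
awk '/Serial Number:/      { sn = $NF }
     /Data Units Read:/    { printf "%s read=%s\n",    sn, $NF }
     /Data Units Written:/ { printf "%s written=%s\n", sn, $NF }' identify.log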
00:07:38.328 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:38.328 Command Effects Log Page: Supported 00:07:38.328 Get Log Page Extended Data: Supported 00:07:38.328 Telemetry Log Pages: Not Supported 00:07:38.328 Persistent Event Log Pages: Not Supported 00:07:38.328 Supported Log Pages Log Page: May Support 00:07:38.328 Commands Supported & Effects Log Page: Not Supported 00:07:38.328 Feature Identifiers & Effects Log Page:May Support 00:07:38.328 NVMe-MI Commands & Effects Log Page: May Support 00:07:38.328 Data Area 4 for Telemetry Log: Not Supported 00:07:38.328 Error Log Page Entries Supported: 1 00:07:38.328 Keep Alive: Not Supported 00:07:38.328 00:07:38.328 NVM Command Set Attributes 00:07:38.328 ========================== 00:07:38.328 Submission Queue Entry Size 00:07:38.328 Max: 64 00:07:38.328 Min: 64 00:07:38.328 Completion Queue Entry Size 00:07:38.328 Max: 16 00:07:38.328 Min: 16 00:07:38.328 Number of Namespaces: 256 00:07:38.328 Compare Command: Supported 00:07:38.328 Write Uncorrectable Command: Not Supported 00:07:38.328 Dataset Management Command: Supported 00:07:38.328 Write Zeroes Command: Supported 00:07:38.328 Set Features Save Field: Supported 00:07:38.328 Reservations: Not Supported 00:07:38.328 Timestamp: Supported 00:07:38.328 Copy: Supported 00:07:38.328 Volatile Write Cache: Present 00:07:38.328 Atomic Write Unit (Normal): 1 00:07:38.328 Atomic Write Unit (PFail): 1 00:07:38.328 Atomic Compare & Write Unit: 1 00:07:38.328 Fused Compare & Write: Not Supported 00:07:38.328 Scatter-Gather List 00:07:38.328 SGL Command Set: Supported 00:07:38.328 SGL Keyed: Not Supported 00:07:38.328 SGL Bit Bucket Descriptor: Not Supported 00:07:38.328 SGL Metadata Pointer: Not Supported 00:07:38.328 Oversized SGL: Not Supported 00:07:38.328 SGL Metadata Address: Not Supported 00:07:38.328 SGL Offset: Not Supported 00:07:38.328 Transport SGL Data Block: Not Supported 00:07:38.328 Replay Protected Memory Block: Not Supported 00:07:38.328 00:07:38.328 Firmware Slot Information 00:07:38.328 ========================= 00:07:38.328 Active slot: 1 00:07:38.328 Slot 1 Firmware Revision: 1.0 00:07:38.328 00:07:38.328 00:07:38.328 Commands Supported and Effects 00:07:38.328 ============================== 00:07:38.328 Admin Commands 00:07:38.328 -------------- 00:07:38.328 Delete I/O Submission Queue (00h): Supported 00:07:38.328 Create I/O Submission Queue (01h): Supported 00:07:38.328 Get Log Page (02h): Supported 00:07:38.328 Delete I/O Completion Queue (04h): Supported 00:07:38.328 Create I/O Completion Queue (05h): Supported 00:07:38.328 Identify (06h): Supported 00:07:38.328 Abort (08h): Supported 00:07:38.328 Set Features (09h): Supported 00:07:38.328 Get Features (0Ah): Supported 00:07:38.328 Asynchronous Event Request (0Ch): Supported 00:07:38.328 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:38.328 Directive Send (19h): Supported 00:07:38.328 Directive Receive (1Ah): Supported 00:07:38.328 Virtualization Management (1Ch): Supported 00:07:38.328 Doorbell Buffer Config (7Ch): Supported 00:07:38.328 Format NVM (80h): Supported LBA-Change 00:07:38.328 I/O Commands 00:07:38.328 ------------ 00:07:38.328 Flush (00h): Supported LBA-Change 00:07:38.328 Write (01h): Supported LBA-Change 00:07:38.328 Read (02h): Supported 00:07:38.328 Compare (05h): Supported 00:07:38.328 Write Zeroes (08h): Supported LBA-Change 00:07:38.328 Dataset Management (09h): Supported LBA-Change 00:07:38.328 Unknown (0Ch): Supported 00:07:38.328 Unknown (12h): Supported 00:07:38.328 Copy (19h): 
Supported LBA-Change 00:07:38.328 Unknown (1Dh): Supported LBA-Change 00:07:38.328 00:07:38.328 Error Log 00:07:38.328 ========= 00:07:38.328 00:07:38.328 Arbitration 00:07:38.328 =========== 00:07:38.328 Arbitration Burst: no limit 00:07:38.328 00:07:38.328 Power Management 00:07:38.328 ================ 00:07:38.328 Number of Power States: 1 00:07:38.328 Current Power State: Power State #0 00:07:38.328 Power State #0: 00:07:38.328 Max Power: 25.00 W 00:07:38.328 Non-Operational State: Operational 00:07:38.328 Entry Latency: 16 microseconds 00:07:38.328 Exit Latency: 4 microseconds 00:07:38.328 Relative Read Throughput: 0 00:07:38.328 Relative Read Latency: 0 00:07:38.328 Relative Write Throughput: 0 00:07:38.328 Relative Write Latency: 0 00:07:38.328 Idle Power: Not Reported 00:07:38.328 Active Power: Not Reported 00:07:38.328 Non-Operational Permissive Mode: Not Supported 00:07:38.328 00:07:38.328 Health Information 00:07:38.328 ================== 00:07:38.328 Critical Warnings: 00:07:38.328 Available Spare Space: OK 00:07:38.328 Temperature: OK 00:07:38.328 Device Reliability: OK 00:07:38.329 Read Only: No 00:07:38.329 Volatile Memory Backup: OK 00:07:38.329 Current Temperature: 323 Kelvin (50 Celsius) 00:07:38.329 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:38.329 Available Spare: 0% 00:07:38.329 Available Spare Threshold: 0% 00:07:38.329 Life Percentage Used: 0% 00:07:38.329 Data Units Read: 2168 00:07:38.329 Data Units Written: 1955 00:07:38.329 Host Read Commands: 113174 00:07:38.329 Host Write Commands: 111443 00:07:38.329 Controller Busy Time: 0 minutes 00:07:38.329 Power Cycles: 0 00:07:38.329 Power On Hours: 0 hours 00:07:38.329 Unsafe Shutdowns: 0 00:07:38.329 Unrecoverable Media Errors: 0 00:07:38.329 Lifetime Error Log Entries: 0 00:07:38.329 Warning Temperature Time: 0 minutes 00:07:38.329 Critical Temperature Time: 0 minutes 00:07:38.329 00:07:38.329 Number of Queues 00:07:38.329 ================ 00:07:38.329 Number of I/O Submission Queues: 64 00:07:38.329 Number of I/O Completion Queues: 64 00:07:38.329 00:07:38.329 ZNS Specific Controller Data 00:07:38.329 ============================ 00:07:38.329 Zone Append Size Limit: 0 00:07:38.329 00:07:38.329 00:07:38.329 Active Namespaces 00:07:38.329 ================= 00:07:38.329 Namespace ID:1 00:07:38.329 Error Recovery Timeout: Unlimited 00:07:38.329 Command Set Identifier: NVM (00h) 00:07:38.329 Deallocate: Supported 00:07:38.329 Deallocated/Unwritten Error: Supported 00:07:38.329 Deallocated Read Value: All 0x00 00:07:38.329 Deallocate in Write Zeroes: Not Supported 00:07:38.329 Deallocated Guard Field: 0xFFFF 00:07:38.329 Flush: Supported 00:07:38.329 Reservation: Not Supported 00:07:38.329 Namespace Sharing Capabilities: Private 00:07:38.329 Size (in LBAs): 1048576 (4GiB) 00:07:38.329 Capacity (in LBAs): 1048576 (4GiB) 00:07:38.329 Utilization (in LBAs): 1048576 (4GiB) 00:07:38.329 Thin Provisioning: Not Supported 00:07:38.329 Per-NS Atomic Units: No 00:07:38.329 Maximum Single Source Range Length: 128 00:07:38.329 Maximum Copy Length: 128 00:07:38.329 Maximum Source Range Count: 128 00:07:38.329 NGUID/EUI64 Never Reused: No 00:07:38.329 Namespace Write Protected: No 00:07:38.329 Number of LBA Formats: 8 00:07:38.329 Current LBA Format: LBA Format #04 00:07:38.329 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:38.329 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:38.329 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:38.329 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:38.329 LBA 
Format #04: Data Size: 4096 Metadata Size: 0 00:07:38.329 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:38.329 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:38.329 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:38.329 00:07:38.329 NVM Specific Namespace Data 00:07:38.329 =========================== 00:07:38.329 Logical Block Storage Tag Mask: 0 00:07:38.329 Protection Information Capabilities: 00:07:38.329 16b Guard Protection Information Storage Tag Support: No 00:07:38.329 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:38.329 Storage Tag Check Read Support: No 00:07:38.329 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.329 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.329 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.329 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.329 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.329 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.329 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.329 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.329 Namespace ID:2 00:07:38.329 Error Recovery Timeout: Unlimited 00:07:38.329 Command Set Identifier: NVM (00h) 00:07:38.329 Deallocate: Supported 00:07:38.329 Deallocated/Unwritten Error: Supported 00:07:38.329 Deallocated Read Value: All 0x00 00:07:38.329 Deallocate in Write Zeroes: Not Supported 00:07:38.329 Deallocated Guard Field: 0xFFFF 00:07:38.329 Flush: Supported 00:07:38.329 Reservation: Not Supported 00:07:38.329 Namespace Sharing Capabilities: Private 00:07:38.329 Size (in LBAs): 1048576 (4GiB) 00:07:38.329 Capacity (in LBAs): 1048576 (4GiB) 00:07:38.329 Utilization (in LBAs): 1048576 (4GiB) 00:07:38.329 Thin Provisioning: Not Supported 00:07:38.329 Per-NS Atomic Units: No 00:07:38.329 Maximum Single Source Range Length: 128 00:07:38.329 Maximum Copy Length: 128 00:07:38.329 Maximum Source Range Count: 128 00:07:38.329 NGUID/EUI64 Never Reused: No 00:07:38.329 Namespace Write Protected: No 00:07:38.329 Number of LBA Formats: 8 00:07:38.329 Current LBA Format: LBA Format #04 00:07:38.329 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:38.329 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:38.329 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:38.329 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:38.329 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:38.329 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:38.329 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:38.329 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:38.329 00:07:38.329 NVM Specific Namespace Data 00:07:38.329 =========================== 00:07:38.329 Logical Block Storage Tag Mask: 0 00:07:38.329 Protection Information Capabilities: 00:07:38.329 16b Guard Protection Information Storage Tag Support: No 00:07:38.329 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:38.329 Storage Tag Check Read Support: No 00:07:38.329 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.329 Extended LBA Format #01: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:07:38.329 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.329 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.329 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.329 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.329 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.329 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.329 Namespace ID:3 00:07:38.329 Error Recovery Timeout: Unlimited 00:07:38.329 Command Set Identifier: NVM (00h) 00:07:38.329 Deallocate: Supported 00:07:38.329 Deallocated/Unwritten Error: Supported 00:07:38.329 Deallocated Read Value: All 0x00 00:07:38.329 Deallocate in Write Zeroes: Not Supported 00:07:38.329 Deallocated Guard Field: 0xFFFF 00:07:38.329 Flush: Supported 00:07:38.329 Reservation: Not Supported 00:07:38.329 Namespace Sharing Capabilities: Private 00:07:38.329 Size (in LBAs): 1048576 (4GiB) 00:07:38.329 Capacity (in LBAs): 1048576 (4GiB) 00:07:38.329 Utilization (in LBAs): 1048576 (4GiB) 00:07:38.329 Thin Provisioning: Not Supported 00:07:38.329 Per-NS Atomic Units: No 00:07:38.329 Maximum Single Source Range Length: 128 00:07:38.329 Maximum Copy Length: 128 00:07:38.329 Maximum Source Range Count: 128 00:07:38.329 NGUID/EUI64 Never Reused: No 00:07:38.329 Namespace Write Protected: No 00:07:38.329 Number of LBA Formats: 8 00:07:38.329 Current LBA Format: LBA Format #04 00:07:38.329 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:38.329 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:38.329 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:38.329 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:38.329 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:38.329 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:38.329 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:38.329 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:38.329 00:07:38.329 NVM Specific Namespace Data 00:07:38.329 =========================== 00:07:38.329 Logical Block Storage Tag Mask: 0 00:07:38.329 Protection Information Capabilities: 00:07:38.329 16b Guard Protection Information Storage Tag Support: No 00:07:38.329 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:38.329 Storage Tag Check Read Support: No 00:07:38.329 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.329 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.329 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.329 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.329 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.329 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.329 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.329 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.329 17:48:56 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:38.329 17:48:56 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:38.587 ===================================================== 00:07:38.587 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:38.587 ===================================================== 00:07:38.587 Controller Capabilities/Features 00:07:38.587 ================================ 00:07:38.587 Vendor ID: 1b36 00:07:38.587 Subsystem Vendor ID: 1af4 00:07:38.587 Serial Number: 12340 00:07:38.587 Model Number: QEMU NVMe Ctrl 00:07:38.587 Firmware Version: 8.0.0 00:07:38.587 Recommended Arb Burst: 6 00:07:38.587 IEEE OUI Identifier: 00 54 52 00:07:38.587 Multi-path I/O 00:07:38.587 May have multiple subsystem ports: No 00:07:38.587 May have multiple controllers: No 00:07:38.587 Associated with SR-IOV VF: No 00:07:38.587 Max Data Transfer Size: 524288 00:07:38.587 Max Number of Namespaces: 256 00:07:38.587 Max Number of I/O Queues: 64 00:07:38.587 NVMe Specification Version (VS): 1.4 00:07:38.587 NVMe Specification Version (Identify): 1.4 00:07:38.587 Maximum Queue Entries: 2048 00:07:38.587 Contiguous Queues Required: Yes 00:07:38.587 Arbitration Mechanisms Supported 00:07:38.587 Weighted Round Robin: Not Supported 00:07:38.587 Vendor Specific: Not Supported 00:07:38.587 Reset Timeout: 7500 ms 00:07:38.587 Doorbell Stride: 4 bytes 00:07:38.587 NVM Subsystem Reset: Not Supported 00:07:38.587 Command Sets Supported 00:07:38.587 NVM Command Set: Supported 00:07:38.587 Boot Partition: Not Supported 00:07:38.587 Memory Page Size Minimum: 4096 bytes 00:07:38.587 Memory Page Size Maximum: 65536 bytes 00:07:38.587 Persistent Memory Region: Not Supported 00:07:38.587 Optional Asynchronous Events Supported 00:07:38.587 Namespace Attribute Notices: Supported 00:07:38.587 Firmware Activation Notices: Not Supported 00:07:38.587 ANA Change Notices: Not Supported 00:07:38.587 PLE Aggregate Log Change Notices: Not Supported 00:07:38.587 LBA Status Info Alert Notices: Not Supported 00:07:38.587 EGE Aggregate Log Change Notices: Not Supported 00:07:38.587 Normal NVM Subsystem Shutdown event: Not Supported 00:07:38.587 Zone Descriptor Change Notices: Not Supported 00:07:38.587 Discovery Log Change Notices: Not Supported 00:07:38.587 Controller Attributes 00:07:38.587 128-bit Host Identifier: Not Supported 00:07:38.587 Non-Operational Permissive Mode: Not Supported 00:07:38.587 NVM Sets: Not Supported 00:07:38.587 Read Recovery Levels: Not Supported 00:07:38.587 Endurance Groups: Not Supported 00:07:38.587 Predictable Latency Mode: Not Supported 00:07:38.587 Traffic Based Keep ALive: Not Supported 00:07:38.587 Namespace Granularity: Not Supported 00:07:38.587 SQ Associations: Not Supported 00:07:38.587 UUID List: Not Supported 00:07:38.587 Multi-Domain Subsystem: Not Supported 00:07:38.587 Fixed Capacity Management: Not Supported 00:07:38.588 Variable Capacity Management: Not Supported 00:07:38.588 Delete Endurance Group: Not Supported 00:07:38.588 Delete NVM Set: Not Supported 00:07:38.588 Extended LBA Formats Supported: Supported 00:07:38.588 Flexible Data Placement Supported: Not Supported 00:07:38.588 00:07:38.588 Controller Memory Buffer Support 00:07:38.588 ================================ 00:07:38.588 Supported: No 00:07:38.588 00:07:38.588 Persistent Memory Region Support 00:07:38.588 ================================ 00:07:38.588 Supported: No 00:07:38.588 00:07:38.588 Admin Command Set Attributes 00:07:38.588 ============================ 00:07:38.588 Security Send/Receive: Not Supported 00:07:38.588 
Format NVM: Supported 00:07:38.588 Firmware Activate/Download: Not Supported 00:07:38.588 Namespace Management: Supported 00:07:38.588 Device Self-Test: Not Supported 00:07:38.588 Directives: Supported 00:07:38.588 NVMe-MI: Not Supported 00:07:38.588 Virtualization Management: Not Supported 00:07:38.588 Doorbell Buffer Config: Supported 00:07:38.588 Get LBA Status Capability: Not Supported 00:07:38.588 Command & Feature Lockdown Capability: Not Supported 00:07:38.588 Abort Command Limit: 4 00:07:38.588 Async Event Request Limit: 4 00:07:38.588 Number of Firmware Slots: N/A 00:07:38.588 Firmware Slot 1 Read-Only: N/A 00:07:38.588 Firmware Activation Without Reset: N/A 00:07:38.588 Multiple Update Detection Support: N/A 00:07:38.588 Firmware Update Granularity: No Information Provided 00:07:38.588 Per-Namespace SMART Log: Yes 00:07:38.588 Asymmetric Namespace Access Log Page: Not Supported 00:07:38.588 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:38.588 Command Effects Log Page: Supported 00:07:38.588 Get Log Page Extended Data: Supported 00:07:38.588 Telemetry Log Pages: Not Supported 00:07:38.588 Persistent Event Log Pages: Not Supported 00:07:38.588 Supported Log Pages Log Page: May Support 00:07:38.588 Commands Supported & Effects Log Page: Not Supported 00:07:38.588 Feature Identifiers & Effects Log Page:May Support 00:07:38.588 NVMe-MI Commands & Effects Log Page: May Support 00:07:38.588 Data Area 4 for Telemetry Log: Not Supported 00:07:38.588 Error Log Page Entries Supported: 1 00:07:38.588 Keep Alive: Not Supported 00:07:38.588 00:07:38.588 NVM Command Set Attributes 00:07:38.588 ========================== 00:07:38.588 Submission Queue Entry Size 00:07:38.588 Max: 64 00:07:38.588 Min: 64 00:07:38.588 Completion Queue Entry Size 00:07:38.588 Max: 16 00:07:38.588 Min: 16 00:07:38.588 Number of Namespaces: 256 00:07:38.588 Compare Command: Supported 00:07:38.588 Write Uncorrectable Command: Not Supported 00:07:38.588 Dataset Management Command: Supported 00:07:38.588 Write Zeroes Command: Supported 00:07:38.588 Set Features Save Field: Supported 00:07:38.588 Reservations: Not Supported 00:07:38.588 Timestamp: Supported 00:07:38.588 Copy: Supported 00:07:38.588 Volatile Write Cache: Present 00:07:38.588 Atomic Write Unit (Normal): 1 00:07:38.588 Atomic Write Unit (PFail): 1 00:07:38.588 Atomic Compare & Write Unit: 1 00:07:38.588 Fused Compare & Write: Not Supported 00:07:38.588 Scatter-Gather List 00:07:38.588 SGL Command Set: Supported 00:07:38.588 SGL Keyed: Not Supported 00:07:38.588 SGL Bit Bucket Descriptor: Not Supported 00:07:38.588 SGL Metadata Pointer: Not Supported 00:07:38.588 Oversized SGL: Not Supported 00:07:38.588 SGL Metadata Address: Not Supported 00:07:38.588 SGL Offset: Not Supported 00:07:38.588 Transport SGL Data Block: Not Supported 00:07:38.588 Replay Protected Memory Block: Not Supported 00:07:38.588 00:07:38.588 Firmware Slot Information 00:07:38.588 ========================= 00:07:38.588 Active slot: 1 00:07:38.588 Slot 1 Firmware Revision: 1.0 00:07:38.588 00:07:38.588 00:07:38.588 Commands Supported and Effects 00:07:38.588 ============================== 00:07:38.588 Admin Commands 00:07:38.588 -------------- 00:07:38.588 Delete I/O Submission Queue (00h): Supported 00:07:38.588 Create I/O Submission Queue (01h): Supported 00:07:38.588 Get Log Page (02h): Supported 00:07:38.588 Delete I/O Completion Queue (04h): Supported 00:07:38.588 Create I/O Completion Queue (05h): Supported 00:07:38.588 Identify (06h): Supported 00:07:38.588 Abort (08h): Supported 
00:07:38.588 Set Features (09h): Supported 00:07:38.588 Get Features (0Ah): Supported 00:07:38.588 Asynchronous Event Request (0Ch): Supported 00:07:38.588 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:38.588 Directive Send (19h): Supported 00:07:38.588 Directive Receive (1Ah): Supported 00:07:38.588 Virtualization Management (1Ch): Supported 00:07:38.588 Doorbell Buffer Config (7Ch): Supported 00:07:38.588 Format NVM (80h): Supported LBA-Change 00:07:38.588 I/O Commands 00:07:38.588 ------------ 00:07:38.588 Flush (00h): Supported LBA-Change 00:07:38.588 Write (01h): Supported LBA-Change 00:07:38.588 Read (02h): Supported 00:07:38.588 Compare (05h): Supported 00:07:38.588 Write Zeroes (08h): Supported LBA-Change 00:07:38.588 Dataset Management (09h): Supported LBA-Change 00:07:38.588 Unknown (0Ch): Supported 00:07:38.588 Unknown (12h): Supported 00:07:38.588 Copy (19h): Supported LBA-Change 00:07:38.588 Unknown (1Dh): Supported LBA-Change 00:07:38.588 00:07:38.588 Error Log 00:07:38.588 ========= 00:07:38.588 00:07:38.588 Arbitration 00:07:38.588 =========== 00:07:38.588 Arbitration Burst: no limit 00:07:38.588 00:07:38.588 Power Management 00:07:38.588 ================ 00:07:38.588 Number of Power States: 1 00:07:38.588 Current Power State: Power State #0 00:07:38.588 Power State #0: 00:07:38.588 Max Power: 25.00 W 00:07:38.588 Non-Operational State: Operational 00:07:38.588 Entry Latency: 16 microseconds 00:07:38.588 Exit Latency: 4 microseconds 00:07:38.588 Relative Read Throughput: 0 00:07:38.588 Relative Read Latency: 0 00:07:38.588 Relative Write Throughput: 0 00:07:38.588 Relative Write Latency: 0 00:07:38.588 Idle Power: Not Reported 00:07:38.588 Active Power: Not Reported 00:07:38.588 Non-Operational Permissive Mode: Not Supported 00:07:38.588 00:07:38.588 Health Information 00:07:38.588 ================== 00:07:38.588 Critical Warnings: 00:07:38.588 Available Spare Space: OK 00:07:38.588 Temperature: OK 00:07:38.588 Device Reliability: OK 00:07:38.588 Read Only: No 00:07:38.588 Volatile Memory Backup: OK 00:07:38.588 Current Temperature: 323 Kelvin (50 Celsius) 00:07:38.588 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:38.588 Available Spare: 0% 00:07:38.588 Available Spare Threshold: 0% 00:07:38.588 Life Percentage Used: 0% 00:07:38.588 Data Units Read: 652 00:07:38.588 Data Units Written: 581 00:07:38.588 Host Read Commands: 36856 00:07:38.588 Host Write Commands: 36642 00:07:38.588 Controller Busy Time: 0 minutes 00:07:38.588 Power Cycles: 0 00:07:38.588 Power On Hours: 0 hours 00:07:38.588 Unsafe Shutdowns: 0 00:07:38.589 Unrecoverable Media Errors: 0 00:07:38.589 Lifetime Error Log Entries: 0 00:07:38.589 Warning Temperature Time: 0 minutes 00:07:38.589 Critical Temperature Time: 0 minutes 00:07:38.589 00:07:38.589 Number of Queues 00:07:38.589 ================ 00:07:38.589 Number of I/O Submission Queues: 64 00:07:38.589 Number of I/O Completion Queues: 64 00:07:38.589 00:07:38.589 ZNS Specific Controller Data 00:07:38.589 ============================ 00:07:38.589 Zone Append Size Limit: 0 00:07:38.589 00:07:38.589 00:07:38.589 Active Namespaces 00:07:38.589 ================= 00:07:38.589 Namespace ID:1 00:07:38.589 Error Recovery Timeout: Unlimited 00:07:38.589 Command Set Identifier: NVM (00h) 00:07:38.589 Deallocate: Supported 00:07:38.589 Deallocated/Unwritten Error: Supported 00:07:38.589 Deallocated Read Value: All 0x00 00:07:38.589 Deallocate in Write Zeroes: Not Supported 00:07:38.589 Deallocated Guard Field: 0xFFFF 00:07:38.589 Flush: 
Supported 00:07:38.589 Reservation: Not Supported 00:07:38.589 Metadata Transferred as: Separate Metadata Buffer 00:07:38.589 Namespace Sharing Capabilities: Private 00:07:38.589 Size (in LBAs): 1548666 (5GiB) 00:07:38.589 Capacity (in LBAs): 1548666 (5GiB) 00:07:38.589 Utilization (in LBAs): 1548666 (5GiB) 00:07:38.589 Thin Provisioning: Not Supported 00:07:38.589 Per-NS Atomic Units: No 00:07:38.589 Maximum Single Source Range Length: 128 00:07:38.589 Maximum Copy Length: 128 00:07:38.589 Maximum Source Range Count: 128 00:07:38.589 NGUID/EUI64 Never Reused: No 00:07:38.589 Namespace Write Protected: No 00:07:38.589 Number of LBA Formats: 8 00:07:38.589 Current LBA Format: LBA Format #07 00:07:38.589 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:38.589 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:38.589 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:38.589 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:38.589 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:38.589 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:38.589 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:38.589 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:38.589 00:07:38.589 NVM Specific Namespace Data 00:07:38.589 =========================== 00:07:38.589 Logical Block Storage Tag Mask: 0 00:07:38.589 Protection Information Capabilities: 00:07:38.589 16b Guard Protection Information Storage Tag Support: No 00:07:38.589 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:38.589 Storage Tag Check Read Support: No 00:07:38.589 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.589 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.589 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.589 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.589 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.589 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.589 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.589 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.589 17:48:56 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:38.589 17:48:56 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:38.847 ===================================================== 00:07:38.847 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:38.847 ===================================================== 00:07:38.847 Controller Capabilities/Features 00:07:38.847 ================================ 00:07:38.847 Vendor ID: 1b36 00:07:38.847 Subsystem Vendor ID: 1af4 00:07:38.847 Serial Number: 12341 00:07:38.847 Model Number: QEMU NVMe Ctrl 00:07:38.847 Firmware Version: 8.0.0 00:07:38.847 Recommended Arb Burst: 6 00:07:38.847 IEEE OUI Identifier: 00 54 52 00:07:38.847 Multi-path I/O 00:07:38.847 May have multiple subsystem ports: No 00:07:38.847 May have multiple controllers: No 00:07:38.847 Associated with SR-IOV VF: No 00:07:38.847 Max Data Transfer Size: 524288 00:07:38.847 Max Number of Namespaces: 256 00:07:38.847 Max Number of I/O Queues: 64 00:07:38.847 NVMe 
Specification Version (VS): 1.4 00:07:38.847 NVMe Specification Version (Identify): 1.4 00:07:38.847 Maximum Queue Entries: 2048 00:07:38.847 Contiguous Queues Required: Yes 00:07:38.847 Arbitration Mechanisms Supported 00:07:38.847 Weighted Round Robin: Not Supported 00:07:38.847 Vendor Specific: Not Supported 00:07:38.847 Reset Timeout: 7500 ms 00:07:38.847 Doorbell Stride: 4 bytes 00:07:38.847 NVM Subsystem Reset: Not Supported 00:07:38.847 Command Sets Supported 00:07:38.847 NVM Command Set: Supported 00:07:38.847 Boot Partition: Not Supported 00:07:38.847 Memory Page Size Minimum: 4096 bytes 00:07:38.847 Memory Page Size Maximum: 65536 bytes 00:07:38.847 Persistent Memory Region: Not Supported 00:07:38.847 Optional Asynchronous Events Supported 00:07:38.847 Namespace Attribute Notices: Supported 00:07:38.847 Firmware Activation Notices: Not Supported 00:07:38.847 ANA Change Notices: Not Supported 00:07:38.847 PLE Aggregate Log Change Notices: Not Supported 00:07:38.847 LBA Status Info Alert Notices: Not Supported 00:07:38.847 EGE Aggregate Log Change Notices: Not Supported 00:07:38.847 Normal NVM Subsystem Shutdown event: Not Supported 00:07:38.847 Zone Descriptor Change Notices: Not Supported 00:07:38.847 Discovery Log Change Notices: Not Supported 00:07:38.847 Controller Attributes 00:07:38.847 128-bit Host Identifier: Not Supported 00:07:38.847 Non-Operational Permissive Mode: Not Supported 00:07:38.847 NVM Sets: Not Supported 00:07:38.847 Read Recovery Levels: Not Supported 00:07:38.847 Endurance Groups: Not Supported 00:07:38.847 Predictable Latency Mode: Not Supported 00:07:38.847 Traffic Based Keep ALive: Not Supported 00:07:38.847 Namespace Granularity: Not Supported 00:07:38.847 SQ Associations: Not Supported 00:07:38.847 UUID List: Not Supported 00:07:38.847 Multi-Domain Subsystem: Not Supported 00:07:38.847 Fixed Capacity Management: Not Supported 00:07:38.847 Variable Capacity Management: Not Supported 00:07:38.847 Delete Endurance Group: Not Supported 00:07:38.847 Delete NVM Set: Not Supported 00:07:38.847 Extended LBA Formats Supported: Supported 00:07:38.847 Flexible Data Placement Supported: Not Supported 00:07:38.847 00:07:38.847 Controller Memory Buffer Support 00:07:38.847 ================================ 00:07:38.847 Supported: No 00:07:38.847 00:07:38.847 Persistent Memory Region Support 00:07:38.847 ================================ 00:07:38.847 Supported: No 00:07:38.847 00:07:38.847 Admin Command Set Attributes 00:07:38.847 ============================ 00:07:38.847 Security Send/Receive: Not Supported 00:07:38.847 Format NVM: Supported 00:07:38.847 Firmware Activate/Download: Not Supported 00:07:38.847 Namespace Management: Supported 00:07:38.847 Device Self-Test: Not Supported 00:07:38.847 Directives: Supported 00:07:38.847 NVMe-MI: Not Supported 00:07:38.847 Virtualization Management: Not Supported 00:07:38.847 Doorbell Buffer Config: Supported 00:07:38.847 Get LBA Status Capability: Not Supported 00:07:38.847 Command & Feature Lockdown Capability: Not Supported 00:07:38.847 Abort Command Limit: 4 00:07:38.847 Async Event Request Limit: 4 00:07:38.847 Number of Firmware Slots: N/A 00:07:38.847 Firmware Slot 1 Read-Only: N/A 00:07:38.847 Firmware Activation Without Reset: N/A 00:07:38.847 Multiple Update Detection Support: N/A 00:07:38.847 Firmware Update Granularity: No Information Provided 00:07:38.847 Per-Namespace SMART Log: Yes 00:07:38.847 Asymmetric Namespace Access Log Page: Not Supported 00:07:38.847 Subsystem NQN: nqn.2019-08.org.qemu:12341 
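The identify dumps above and below are emitted one per controller by the loop visible in the trace markers (nvme/nvme.sh@15: for bdf in "${bdfs[@]}"). A minimal sketch of that pattern, with the bdfs array filled in by hand for illustration; the real nvme.sh discovers the addresses itself, and only the four traddr values below actually occur in this run:

#!/usr/bin/env bash
# Sketch of the per-controller identify loop suggested by the nvme.sh trace.
# bdfs is hard-coded here for illustration; the harness populates it dynamically.
bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify

for bdf in "${bdfs[@]}"; do
    # Flags copied from the invocations recorded in this log:
    # -r selects the PCIe transport and address, -i 0 the shared-memory group ID.
    "$identify" -r "trtype:PCIe traddr:$bdf" -i 0
done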
00:07:38.847 Command Effects Log Page: Supported 00:07:38.847 Get Log Page Extended Data: Supported 00:07:38.847 Telemetry Log Pages: Not Supported 00:07:38.847 Persistent Event Log Pages: Not Supported 00:07:38.847 Supported Log Pages Log Page: May Support 00:07:38.847 Commands Supported & Effects Log Page: Not Supported 00:07:38.847 Feature Identifiers & Effects Log Page:May Support 00:07:38.847 NVMe-MI Commands & Effects Log Page: May Support 00:07:38.847 Data Area 4 for Telemetry Log: Not Supported 00:07:38.847 Error Log Page Entries Supported: 1 00:07:38.847 Keep Alive: Not Supported 00:07:38.847 00:07:38.847 NVM Command Set Attributes 00:07:38.847 ========================== 00:07:38.847 Submission Queue Entry Size 00:07:38.847 Max: 64 00:07:38.848 Min: 64 00:07:38.848 Completion Queue Entry Size 00:07:38.848 Max: 16 00:07:38.848 Min: 16 00:07:38.848 Number of Namespaces: 256 00:07:38.848 Compare Command: Supported 00:07:38.848 Write Uncorrectable Command: Not Supported 00:07:38.848 Dataset Management Command: Supported 00:07:38.848 Write Zeroes Command: Supported 00:07:38.848 Set Features Save Field: Supported 00:07:38.848 Reservations: Not Supported 00:07:38.848 Timestamp: Supported 00:07:38.848 Copy: Supported 00:07:38.848 Volatile Write Cache: Present 00:07:38.848 Atomic Write Unit (Normal): 1 00:07:38.848 Atomic Write Unit (PFail): 1 00:07:38.848 Atomic Compare & Write Unit: 1 00:07:38.848 Fused Compare & Write: Not Supported 00:07:38.848 Scatter-Gather List 00:07:38.848 SGL Command Set: Supported 00:07:38.848 SGL Keyed: Not Supported 00:07:38.848 SGL Bit Bucket Descriptor: Not Supported 00:07:38.848 SGL Metadata Pointer: Not Supported 00:07:38.848 Oversized SGL: Not Supported 00:07:38.848 SGL Metadata Address: Not Supported 00:07:38.848 SGL Offset: Not Supported 00:07:38.848 Transport SGL Data Block: Not Supported 00:07:38.848 Replay Protected Memory Block: Not Supported 00:07:38.848 00:07:38.848 Firmware Slot Information 00:07:38.848 ========================= 00:07:38.848 Active slot: 1 00:07:38.848 Slot 1 Firmware Revision: 1.0 00:07:38.848 00:07:38.848 00:07:38.848 Commands Supported and Effects 00:07:38.848 ============================== 00:07:38.848 Admin Commands 00:07:38.848 -------------- 00:07:38.848 Delete I/O Submission Queue (00h): Supported 00:07:38.848 Create I/O Submission Queue (01h): Supported 00:07:38.848 Get Log Page (02h): Supported 00:07:38.848 Delete I/O Completion Queue (04h): Supported 00:07:38.848 Create I/O Completion Queue (05h): Supported 00:07:38.848 Identify (06h): Supported 00:07:38.848 Abort (08h): Supported 00:07:38.848 Set Features (09h): Supported 00:07:38.848 Get Features (0Ah): Supported 00:07:38.848 Asynchronous Event Request (0Ch): Supported 00:07:38.848 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:38.848 Directive Send (19h): Supported 00:07:38.848 Directive Receive (1Ah): Supported 00:07:38.848 Virtualization Management (1Ch): Supported 00:07:38.848 Doorbell Buffer Config (7Ch): Supported 00:07:38.848 Format NVM (80h): Supported LBA-Change 00:07:38.848 I/O Commands 00:07:38.848 ------------ 00:07:38.848 Flush (00h): Supported LBA-Change 00:07:38.848 Write (01h): Supported LBA-Change 00:07:38.848 Read (02h): Supported 00:07:38.848 Compare (05h): Supported 00:07:38.848 Write Zeroes (08h): Supported LBA-Change 00:07:38.848 Dataset Management (09h): Supported LBA-Change 00:07:38.848 Unknown (0Ch): Supported 00:07:38.848 Unknown (12h): Supported 00:07:38.848 Copy (19h): Supported LBA-Change 00:07:38.848 Unknown (1Dh): 
Supported LBA-Change 00:07:38.848 00:07:38.848 Error Log 00:07:38.848 ========= 00:07:38.848 00:07:38.848 Arbitration 00:07:38.848 =========== 00:07:38.848 Arbitration Burst: no limit 00:07:38.848 00:07:38.848 Power Management 00:07:38.848 ================ 00:07:38.848 Number of Power States: 1 00:07:38.848 Current Power State: Power State #0 00:07:38.848 Power State #0: 00:07:38.848 Max Power: 25.00 W 00:07:38.848 Non-Operational State: Operational 00:07:38.848 Entry Latency: 16 microseconds 00:07:38.848 Exit Latency: 4 microseconds 00:07:38.848 Relative Read Throughput: 0 00:07:38.848 Relative Read Latency: 0 00:07:38.848 Relative Write Throughput: 0 00:07:38.848 Relative Write Latency: 0 00:07:38.848 Idle Power: Not Reported 00:07:38.848 Active Power: Not Reported 00:07:38.848 Non-Operational Permissive Mode: Not Supported 00:07:38.848 00:07:38.848 Health Information 00:07:38.848 ================== 00:07:38.848 Critical Warnings: 00:07:38.848 Available Spare Space: OK 00:07:38.848 Temperature: OK 00:07:38.848 Device Reliability: OK 00:07:38.848 Read Only: No 00:07:38.848 Volatile Memory Backup: OK 00:07:38.848 Current Temperature: 323 Kelvin (50 Celsius) 00:07:38.848 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:38.848 Available Spare: 0% 00:07:38.848 Available Spare Threshold: 0% 00:07:38.848 Life Percentage Used: 0% 00:07:38.848 Data Units Read: 1038 00:07:38.848 Data Units Written: 911 00:07:38.848 Host Read Commands: 56769 00:07:38.848 Host Write Commands: 55666 00:07:38.848 Controller Busy Time: 0 minutes 00:07:38.848 Power Cycles: 0 00:07:38.848 Power On Hours: 0 hours 00:07:38.848 Unsafe Shutdowns: 0 00:07:38.848 Unrecoverable Media Errors: 0 00:07:38.848 Lifetime Error Log Entries: 0 00:07:38.848 Warning Temperature Time: 0 minutes 00:07:38.848 Critical Temperature Time: 0 minutes 00:07:38.848 00:07:38.848 Number of Queues 00:07:38.848 ================ 00:07:38.848 Number of I/O Submission Queues: 64 00:07:38.848 Number of I/O Completion Queues: 64 00:07:38.848 00:07:38.848 ZNS Specific Controller Data 00:07:38.848 ============================ 00:07:38.848 Zone Append Size Limit: 0 00:07:38.848 00:07:38.848 00:07:38.848 Active Namespaces 00:07:38.848 ================= 00:07:38.848 Namespace ID:1 00:07:38.848 Error Recovery Timeout: Unlimited 00:07:38.848 Command Set Identifier: NVM (00h) 00:07:38.848 Deallocate: Supported 00:07:38.848 Deallocated/Unwritten Error: Supported 00:07:38.848 Deallocated Read Value: All 0x00 00:07:38.848 Deallocate in Write Zeroes: Not Supported 00:07:38.848 Deallocated Guard Field: 0xFFFF 00:07:38.848 Flush: Supported 00:07:38.848 Reservation: Not Supported 00:07:38.848 Namespace Sharing Capabilities: Private 00:07:38.848 Size (in LBAs): 1310720 (5GiB) 00:07:38.848 Capacity (in LBAs): 1310720 (5GiB) 00:07:38.848 Utilization (in LBAs): 1310720 (5GiB) 00:07:38.848 Thin Provisioning: Not Supported 00:07:38.848 Per-NS Atomic Units: No 00:07:38.848 Maximum Single Source Range Length: 128 00:07:38.848 Maximum Copy Length: 128 00:07:38.848 Maximum Source Range Count: 128 00:07:38.848 NGUID/EUI64 Never Reused: No 00:07:38.848 Namespace Write Protected: No 00:07:38.848 Number of LBA Formats: 8 00:07:38.848 Current LBA Format: LBA Format #04 00:07:38.848 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:38.849 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:38.849 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:38.849 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:38.849 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:38.849 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:38.849 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:38.849 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:38.849 00:07:38.849 NVM Specific Namespace Data 00:07:38.849 =========================== 00:07:38.849 Logical Block Storage Tag Mask: 0 00:07:38.849 Protection Information Capabilities: 00:07:38.849 16b Guard Protection Information Storage Tag Support: No 00:07:38.849 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:38.849 Storage Tag Check Read Support: No 00:07:38.849 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.849 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.849 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.849 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.849 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.849 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.849 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.849 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.849 17:48:57 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:38.849 17:48:57 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:39.108 ===================================================== 00:07:39.108 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:39.108 ===================================================== 00:07:39.108 Controller Capabilities/Features 00:07:39.108 ================================ 00:07:39.108 Vendor ID: 1b36 00:07:39.108 Subsystem Vendor ID: 1af4 00:07:39.108 Serial Number: 12342 00:07:39.108 Model Number: QEMU NVMe Ctrl 00:07:39.108 Firmware Version: 8.0.0 00:07:39.108 Recommended Arb Burst: 6 00:07:39.108 IEEE OUI Identifier: 00 54 52 00:07:39.108 Multi-path I/O 00:07:39.108 May have multiple subsystem ports: No 00:07:39.108 May have multiple controllers: No 00:07:39.108 Associated with SR-IOV VF: No 00:07:39.108 Max Data Transfer Size: 524288 00:07:39.108 Max Number of Namespaces: 256 00:07:39.108 Max Number of I/O Queues: 64 00:07:39.108 NVMe Specification Version (VS): 1.4 00:07:39.108 NVMe Specification Version (Identify): 1.4 00:07:39.108 Maximum Queue Entries: 2048 00:07:39.108 Contiguous Queues Required: Yes 00:07:39.108 Arbitration Mechanisms Supported 00:07:39.108 Weighted Round Robin: Not Supported 00:07:39.108 Vendor Specific: Not Supported 00:07:39.108 Reset Timeout: 7500 ms 00:07:39.108 Doorbell Stride: 4 bytes 00:07:39.108 NVM Subsystem Reset: Not Supported 00:07:39.108 Command Sets Supported 00:07:39.108 NVM Command Set: Supported 00:07:39.108 Boot Partition: Not Supported 00:07:39.108 Memory Page Size Minimum: 4096 bytes 00:07:39.108 Memory Page Size Maximum: 65536 bytes 00:07:39.108 Persistent Memory Region: Not Supported 00:07:39.108 Optional Asynchronous Events Supported 00:07:39.108 Namespace Attribute Notices: Supported 00:07:39.108 Firmware Activation Notices: Not Supported 00:07:39.108 ANA Change Notices: Not Supported 00:07:39.108 PLE Aggregate Log Change Notices: Not Supported 00:07:39.108 LBA Status Info Alert Notices: 
Not Supported 00:07:39.108 EGE Aggregate Log Change Notices: Not Supported 00:07:39.108 Normal NVM Subsystem Shutdown event: Not Supported 00:07:39.108 Zone Descriptor Change Notices: Not Supported 00:07:39.108 Discovery Log Change Notices: Not Supported 00:07:39.108 Controller Attributes 00:07:39.108 128-bit Host Identifier: Not Supported 00:07:39.108 Non-Operational Permissive Mode: Not Supported 00:07:39.108 NVM Sets: Not Supported 00:07:39.108 Read Recovery Levels: Not Supported 00:07:39.108 Endurance Groups: Not Supported 00:07:39.108 Predictable Latency Mode: Not Supported 00:07:39.108 Traffic Based Keep ALive: Not Supported 00:07:39.108 Namespace Granularity: Not Supported 00:07:39.108 SQ Associations: Not Supported 00:07:39.108 UUID List: Not Supported 00:07:39.108 Multi-Domain Subsystem: Not Supported 00:07:39.108 Fixed Capacity Management: Not Supported 00:07:39.108 Variable Capacity Management: Not Supported 00:07:39.108 Delete Endurance Group: Not Supported 00:07:39.108 Delete NVM Set: Not Supported 00:07:39.108 Extended LBA Formats Supported: Supported 00:07:39.108 Flexible Data Placement Supported: Not Supported 00:07:39.108 00:07:39.108 Controller Memory Buffer Support 00:07:39.108 ================================ 00:07:39.108 Supported: No 00:07:39.108 00:07:39.108 Persistent Memory Region Support 00:07:39.108 ================================ 00:07:39.108 Supported: No 00:07:39.108 00:07:39.108 Admin Command Set Attributes 00:07:39.108 ============================ 00:07:39.108 Security Send/Receive: Not Supported 00:07:39.108 Format NVM: Supported 00:07:39.108 Firmware Activate/Download: Not Supported 00:07:39.108 Namespace Management: Supported 00:07:39.108 Device Self-Test: Not Supported 00:07:39.108 Directives: Supported 00:07:39.108 NVMe-MI: Not Supported 00:07:39.108 Virtualization Management: Not Supported 00:07:39.108 Doorbell Buffer Config: Supported 00:07:39.108 Get LBA Status Capability: Not Supported 00:07:39.108 Command & Feature Lockdown Capability: Not Supported 00:07:39.108 Abort Command Limit: 4 00:07:39.108 Async Event Request Limit: 4 00:07:39.108 Number of Firmware Slots: N/A 00:07:39.108 Firmware Slot 1 Read-Only: N/A 00:07:39.108 Firmware Activation Without Reset: N/A 00:07:39.108 Multiple Update Detection Support: N/A 00:07:39.108 Firmware Update Granularity: No Information Provided 00:07:39.108 Per-Namespace SMART Log: Yes 00:07:39.108 Asymmetric Namespace Access Log Page: Not Supported 00:07:39.108 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:39.108 Command Effects Log Page: Supported 00:07:39.108 Get Log Page Extended Data: Supported 00:07:39.108 Telemetry Log Pages: Not Supported 00:07:39.108 Persistent Event Log Pages: Not Supported 00:07:39.108 Supported Log Pages Log Page: May Support 00:07:39.108 Commands Supported & Effects Log Page: Not Supported 00:07:39.108 Feature Identifiers & Effects Log Page:May Support 00:07:39.108 NVMe-MI Commands & Effects Log Page: May Support 00:07:39.108 Data Area 4 for Telemetry Log: Not Supported 00:07:39.108 Error Log Page Entries Supported: 1 00:07:39.108 Keep Alive: Not Supported 00:07:39.108 00:07:39.108 NVM Command Set Attributes 00:07:39.108 ========================== 00:07:39.108 Submission Queue Entry Size 00:07:39.108 Max: 64 00:07:39.108 Min: 64 00:07:39.108 Completion Queue Entry Size 00:07:39.108 Max: 16 00:07:39.108 Min: 16 00:07:39.108 Number of Namespaces: 256 00:07:39.108 Compare Command: Supported 00:07:39.108 Write Uncorrectable Command: Not Supported 00:07:39.108 Dataset Management Command: 
Supported 00:07:39.108 Write Zeroes Command: Supported 00:07:39.108 Set Features Save Field: Supported 00:07:39.108 Reservations: Not Supported 00:07:39.108 Timestamp: Supported 00:07:39.108 Copy: Supported 00:07:39.108 Volatile Write Cache: Present 00:07:39.108 Atomic Write Unit (Normal): 1 00:07:39.108 Atomic Write Unit (PFail): 1 00:07:39.108 Atomic Compare & Write Unit: 1 00:07:39.108 Fused Compare & Write: Not Supported 00:07:39.108 Scatter-Gather List 00:07:39.108 SGL Command Set: Supported 00:07:39.108 SGL Keyed: Not Supported 00:07:39.108 SGL Bit Bucket Descriptor: Not Supported 00:07:39.108 SGL Metadata Pointer: Not Supported 00:07:39.108 Oversized SGL: Not Supported 00:07:39.108 SGL Metadata Address: Not Supported 00:07:39.108 SGL Offset: Not Supported 00:07:39.108 Transport SGL Data Block: Not Supported 00:07:39.108 Replay Protected Memory Block: Not Supported 00:07:39.108 00:07:39.108 Firmware Slot Information 00:07:39.108 ========================= 00:07:39.108 Active slot: 1 00:07:39.108 Slot 1 Firmware Revision: 1.0 00:07:39.108 00:07:39.108 00:07:39.108 Commands Supported and Effects 00:07:39.108 ============================== 00:07:39.108 Admin Commands 00:07:39.108 -------------- 00:07:39.108 Delete I/O Submission Queue (00h): Supported 00:07:39.108 Create I/O Submission Queue (01h): Supported 00:07:39.108 Get Log Page (02h): Supported 00:07:39.108 Delete I/O Completion Queue (04h): Supported 00:07:39.108 Create I/O Completion Queue (05h): Supported 00:07:39.108 Identify (06h): Supported 00:07:39.108 Abort (08h): Supported 00:07:39.108 Set Features (09h): Supported 00:07:39.108 Get Features (0Ah): Supported 00:07:39.108 Asynchronous Event Request (0Ch): Supported 00:07:39.108 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:39.108 Directive Send (19h): Supported 00:07:39.108 Directive Receive (1Ah): Supported 00:07:39.108 Virtualization Management (1Ch): Supported 00:07:39.108 Doorbell Buffer Config (7Ch): Supported 00:07:39.108 Format NVM (80h): Supported LBA-Change 00:07:39.108 I/O Commands 00:07:39.108 ------------ 00:07:39.108 Flush (00h): Supported LBA-Change 00:07:39.108 Write (01h): Supported LBA-Change 00:07:39.108 Read (02h): Supported 00:07:39.108 Compare (05h): Supported 00:07:39.108 Write Zeroes (08h): Supported LBA-Change 00:07:39.108 Dataset Management (09h): Supported LBA-Change 00:07:39.109 Unknown (0Ch): Supported 00:07:39.109 Unknown (12h): Supported 00:07:39.109 Copy (19h): Supported LBA-Change 00:07:39.109 Unknown (1Dh): Supported LBA-Change 00:07:39.109 00:07:39.109 Error Log 00:07:39.109 ========= 00:07:39.109 00:07:39.109 Arbitration 00:07:39.109 =========== 00:07:39.109 Arbitration Burst: no limit 00:07:39.109 00:07:39.109 Power Management 00:07:39.109 ================ 00:07:39.109 Number of Power States: 1 00:07:39.109 Current Power State: Power State #0 00:07:39.109 Power State #0: 00:07:39.109 Max Power: 25.00 W 00:07:39.109 Non-Operational State: Operational 00:07:39.109 Entry Latency: 16 microseconds 00:07:39.109 Exit Latency: 4 microseconds 00:07:39.109 Relative Read Throughput: 0 00:07:39.109 Relative Read Latency: 0 00:07:39.109 Relative Write Throughput: 0 00:07:39.109 Relative Write Latency: 0 00:07:39.109 Idle Power: Not Reported 00:07:39.109 Active Power: Not Reported 00:07:39.109 Non-Operational Permissive Mode: Not Supported 00:07:39.109 00:07:39.109 Health Information 00:07:39.109 ================== 00:07:39.109 Critical Warnings: 00:07:39.109 Available Spare Space: OK 00:07:39.109 Temperature: OK 00:07:39.109 Device 
Reliability: OK 00:07:39.109 Read Only: No 00:07:39.109 Volatile Memory Backup: OK 00:07:39.109 Current Temperature: 323 Kelvin (50 Celsius) 00:07:39.109 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:39.109 Available Spare: 0% 00:07:39.109 Available Spare Threshold: 0% 00:07:39.109 Life Percentage Used: 0% 00:07:39.109 Data Units Read: 2168 00:07:39.109 Data Units Written: 1955 00:07:39.109 Host Read Commands: 113174 00:07:39.109 Host Write Commands: 111443 00:07:39.109 Controller Busy Time: 0 minutes 00:07:39.109 Power Cycles: 0 00:07:39.109 Power On Hours: 0 hours 00:07:39.109 Unsafe Shutdowns: 0 00:07:39.109 Unrecoverable Media Errors: 0 00:07:39.109 Lifetime Error Log Entries: 0 00:07:39.109 Warning Temperature Time: 0 minutes 00:07:39.109 Critical Temperature Time: 0 minutes 00:07:39.109 00:07:39.109 Number of Queues 00:07:39.109 ================ 00:07:39.109 Number of I/O Submission Queues: 64 00:07:39.109 Number of I/O Completion Queues: 64 00:07:39.109 00:07:39.109 ZNS Specific Controller Data 00:07:39.109 ============================ 00:07:39.109 Zone Append Size Limit: 0 00:07:39.109 00:07:39.109 00:07:39.109 Active Namespaces 00:07:39.109 ================= 00:07:39.109 Namespace ID:1 00:07:39.109 Error Recovery Timeout: Unlimited 00:07:39.109 Command Set Identifier: NVM (00h) 00:07:39.109 Deallocate: Supported 00:07:39.109 Deallocated/Unwritten Error: Supported 00:07:39.109 Deallocated Read Value: All 0x00 00:07:39.109 Deallocate in Write Zeroes: Not Supported 00:07:39.109 Deallocated Guard Field: 0xFFFF 00:07:39.109 Flush: Supported 00:07:39.109 Reservation: Not Supported 00:07:39.109 Namespace Sharing Capabilities: Private 00:07:39.109 Size (in LBAs): 1048576 (4GiB) 00:07:39.109 Capacity (in LBAs): 1048576 (4GiB) 00:07:39.109 Utilization (in LBAs): 1048576 (4GiB) 00:07:39.109 Thin Provisioning: Not Supported 00:07:39.109 Per-NS Atomic Units: No 00:07:39.109 Maximum Single Source Range Length: 128 00:07:39.109 Maximum Copy Length: 128 00:07:39.109 Maximum Source Range Count: 128 00:07:39.109 NGUID/EUI64 Never Reused: No 00:07:39.109 Namespace Write Protected: No 00:07:39.109 Number of LBA Formats: 8 00:07:39.109 Current LBA Format: LBA Format #04 00:07:39.109 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:39.109 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:39.109 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:39.109 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:39.109 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:39.109 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:39.109 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:39.109 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:39.109 00:07:39.109 NVM Specific Namespace Data 00:07:39.109 =========================== 00:07:39.109 Logical Block Storage Tag Mask: 0 00:07:39.109 Protection Information Capabilities: 00:07:39.109 16b Guard Protection Information Storage Tag Support: No 00:07:39.109 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:39.109 Storage Tag Check Read Support: No 00:07:39.109 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.109 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.109 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.109 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.109 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.109 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.109 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.109 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.109 Namespace ID:2 00:07:39.109 Error Recovery Timeout: Unlimited 00:07:39.109 Command Set Identifier: NVM (00h) 00:07:39.109 Deallocate: Supported 00:07:39.109 Deallocated/Unwritten Error: Supported 00:07:39.109 Deallocated Read Value: All 0x00 00:07:39.109 Deallocate in Write Zeroes: Not Supported 00:07:39.109 Deallocated Guard Field: 0xFFFF 00:07:39.109 Flush: Supported 00:07:39.109 Reservation: Not Supported 00:07:39.109 Namespace Sharing Capabilities: Private 00:07:39.109 Size (in LBAs): 1048576 (4GiB) 00:07:39.109 Capacity (in LBAs): 1048576 (4GiB) 00:07:39.109 Utilization (in LBAs): 1048576 (4GiB) 00:07:39.109 Thin Provisioning: Not Supported 00:07:39.109 Per-NS Atomic Units: No 00:07:39.109 Maximum Single Source Range Length: 128 00:07:39.109 Maximum Copy Length: 128 00:07:39.109 Maximum Source Range Count: 128 00:07:39.109 NGUID/EUI64 Never Reused: No 00:07:39.109 Namespace Write Protected: No 00:07:39.109 Number of LBA Formats: 8 00:07:39.109 Current LBA Format: LBA Format #04 00:07:39.109 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:39.109 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:39.109 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:39.109 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:39.109 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:39.109 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:39.109 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:39.109 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:39.109 00:07:39.109 NVM Specific Namespace Data 00:07:39.109 =========================== 00:07:39.109 Logical Block Storage Tag Mask: 0 00:07:39.109 Protection Information Capabilities: 00:07:39.109 16b Guard Protection Information Storage Tag Support: No 00:07:39.109 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:39.109 Storage Tag Check Read Support: No 00:07:39.109 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.109 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.109 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.109 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.109 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.109 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.109 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.109 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.109 Namespace ID:3 00:07:39.109 Error Recovery Timeout: Unlimited 00:07:39.109 Command Set Identifier: NVM (00h) 00:07:39.109 Deallocate: Supported 00:07:39.109 Deallocated/Unwritten Error: Supported 00:07:39.109 Deallocated Read Value: All 0x00 00:07:39.109 Deallocate in Write Zeroes: Not Supported 00:07:39.109 Deallocated Guard Field: 0xFFFF 00:07:39.109 Flush: Supported 00:07:39.109 Reservation: Not Supported 00:07:39.109 
Namespace Sharing Capabilities: Private 00:07:39.109 Size (in LBAs): 1048576 (4GiB) 00:07:39.109 Capacity (in LBAs): 1048576 (4GiB) 00:07:39.109 Utilization (in LBAs): 1048576 (4GiB) 00:07:39.109 Thin Provisioning: Not Supported 00:07:39.109 Per-NS Atomic Units: No 00:07:39.109 Maximum Single Source Range Length: 128 00:07:39.109 Maximum Copy Length: 128 00:07:39.109 Maximum Source Range Count: 128 00:07:39.109 NGUID/EUI64 Never Reused: No 00:07:39.109 Namespace Write Protected: No 00:07:39.109 Number of LBA Formats: 8 00:07:39.109 Current LBA Format: LBA Format #04 00:07:39.109 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:39.109 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:39.109 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:39.109 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:39.109 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:39.109 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:39.109 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:39.109 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:39.109 00:07:39.109 NVM Specific Namespace Data 00:07:39.109 =========================== 00:07:39.109 Logical Block Storage Tag Mask: 0 00:07:39.109 Protection Information Capabilities: 00:07:39.109 16b Guard Protection Information Storage Tag Support: No 00:07:39.109 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:39.109 Storage Tag Check Read Support: No 00:07:39.110 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.110 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.110 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.110 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.110 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.110 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.110 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.110 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.110 17:48:57 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:39.110 17:48:57 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:39.110 ===================================================== 00:07:39.110 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:39.110 ===================================================== 00:07:39.110 Controller Capabilities/Features 00:07:39.110 ================================ 00:07:39.110 Vendor ID: 1b36 00:07:39.110 Subsystem Vendor ID: 1af4 00:07:39.110 Serial Number: 12343 00:07:39.110 Model Number: QEMU NVMe Ctrl 00:07:39.110 Firmware Version: 8.0.0 00:07:39.110 Recommended Arb Burst: 6 00:07:39.110 IEEE OUI Identifier: 00 54 52 00:07:39.110 Multi-path I/O 00:07:39.110 May have multiple subsystem ports: No 00:07:39.110 May have multiple controllers: Yes 00:07:39.110 Associated with SR-IOV VF: No 00:07:39.110 Max Data Transfer Size: 524288 00:07:39.110 Max Number of Namespaces: 256 00:07:39.110 Max Number of I/O Queues: 64 00:07:39.110 NVMe Specification Version (VS): 1.4 00:07:39.110 NVMe Specification Version (Identify): 1.4 00:07:39.110 Maximum Queue Entries: 2048 
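Every controller in this run reports the same thermal data: a composite temperature of 323 Kelvin against a 343 Kelvin threshold, with the Celsius values (50 and 70) derived using the integer offset of 273 that the output implies. A quick shell check of the conversion and of the remaining headroom:

# Kelvin-to-Celsius conversion as implied by the identify output above.
current_k=323
threshold_k=343
echo "current:   $((current_k - 273)) C"       # 50 Celsius, as printed
echo "threshold: $((threshold_k - 273)) C"     # 70 Celsius, as printed
echo "headroom:  $((threshold_k - current_k)) degrees to the threshold"  # 20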
00:07:39.110 Contiguous Queues Required: Yes 00:07:39.110 Arbitration Mechanisms Supported 00:07:39.110 Weighted Round Robin: Not Supported 00:07:39.110 Vendor Specific: Not Supported 00:07:39.110 Reset Timeout: 7500 ms 00:07:39.110 Doorbell Stride: 4 bytes 00:07:39.110 NVM Subsystem Reset: Not Supported 00:07:39.110 Command Sets Supported 00:07:39.110 NVM Command Set: Supported 00:07:39.110 Boot Partition: Not Supported 00:07:39.110 Memory Page Size Minimum: 4096 bytes 00:07:39.110 Memory Page Size Maximum: 65536 bytes 00:07:39.110 Persistent Memory Region: Not Supported 00:07:39.110 Optional Asynchronous Events Supported 00:07:39.110 Namespace Attribute Notices: Supported 00:07:39.110 Firmware Activation Notices: Not Supported 00:07:39.110 ANA Change Notices: Not Supported 00:07:39.110 PLE Aggregate Log Change Notices: Not Supported 00:07:39.110 LBA Status Info Alert Notices: Not Supported 00:07:39.110 EGE Aggregate Log Change Notices: Not Supported 00:07:39.110 Normal NVM Subsystem Shutdown event: Not Supported 00:07:39.110 Zone Descriptor Change Notices: Not Supported 00:07:39.110 Discovery Log Change Notices: Not Supported 00:07:39.110 Controller Attributes 00:07:39.110 128-bit Host Identifier: Not Supported 00:07:39.110 Non-Operational Permissive Mode: Not Supported 00:07:39.110 NVM Sets: Not Supported 00:07:39.110 Read Recovery Levels: Not Supported 00:07:39.110 Endurance Groups: Supported 00:07:39.110 Predictable Latency Mode: Not Supported 00:07:39.110 Traffic Based Keep ALive: Not Supported 00:07:39.110 Namespace Granularity: Not Supported 00:07:39.110 SQ Associations: Not Supported 00:07:39.110 UUID List: Not Supported 00:07:39.110 Multi-Domain Subsystem: Not Supported 00:07:39.110 Fixed Capacity Management: Not Supported 00:07:39.110 Variable Capacity Management: Not Supported 00:07:39.110 Delete Endurance Group: Not Supported 00:07:39.110 Delete NVM Set: Not Supported 00:07:39.110 Extended LBA Formats Supported: Supported 00:07:39.110 Flexible Data Placement Supported: Supported 00:07:39.110 00:07:39.110 Controller Memory Buffer Support 00:07:39.110 ================================ 00:07:39.110 Supported: No 00:07:39.110 00:07:39.110 Persistent Memory Region Support 00:07:39.110 ================================ 00:07:39.110 Supported: No 00:07:39.110 00:07:39.110 Admin Command Set Attributes 00:07:39.110 ============================ 00:07:39.110 Security Send/Receive: Not Supported 00:07:39.110 Format NVM: Supported 00:07:39.110 Firmware Activate/Download: Not Supported 00:07:39.110 Namespace Management: Supported 00:07:39.110 Device Self-Test: Not Supported 00:07:39.110 Directives: Supported 00:07:39.110 NVMe-MI: Not Supported 00:07:39.110 Virtualization Management: Not Supported 00:07:39.110 Doorbell Buffer Config: Supported 00:07:39.110 Get LBA Status Capability: Not Supported 00:07:39.110 Command & Feature Lockdown Capability: Not Supported 00:07:39.110 Abort Command Limit: 4 00:07:39.110 Async Event Request Limit: 4 00:07:39.110 Number of Firmware Slots: N/A 00:07:39.110 Firmware Slot 1 Read-Only: N/A 00:07:39.110 Firmware Activation Without Reset: N/A 00:07:39.110 Multiple Update Detection Support: N/A 00:07:39.110 Firmware Update Granularity: No Information Provided 00:07:39.110 Per-Namespace SMART Log: Yes 00:07:39.110 Asymmetric Namespace Access Log Page: Not Supported 00:07:39.110 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:39.110 Command Effects Log Page: Supported 00:07:39.110 Get Log Page Extended Data: Supported 00:07:39.110 Telemetry Log Pages: Not 
Supported 00:07:39.110 Persistent Event Log Pages: Not Supported 00:07:39.110 Supported Log Pages Log Page: May Support 00:07:39.110 Commands Supported & Effects Log Page: Not Supported 00:07:39.110 Feature Identifiers & Effects Log Page:May Support 00:07:39.110 NVMe-MI Commands & Effects Log Page: May Support 00:07:39.110 Data Area 4 for Telemetry Log: Not Supported 00:07:39.110 Error Log Page Entries Supported: 1 00:07:39.110 Keep Alive: Not Supported 00:07:39.110 00:07:39.110 NVM Command Set Attributes 00:07:39.110 ========================== 00:07:39.110 Submission Queue Entry Size 00:07:39.110 Max: 64 00:07:39.110 Min: 64 00:07:39.110 Completion Queue Entry Size 00:07:39.110 Max: 16 00:07:39.110 Min: 16 00:07:39.110 Number of Namespaces: 256 00:07:39.110 Compare Command: Supported 00:07:39.110 Write Uncorrectable Command: Not Supported 00:07:39.110 Dataset Management Command: Supported 00:07:39.110 Write Zeroes Command: Supported 00:07:39.110 Set Features Save Field: Supported 00:07:39.110 Reservations: Not Supported 00:07:39.110 Timestamp: Supported 00:07:39.110 Copy: Supported 00:07:39.110 Volatile Write Cache: Present 00:07:39.110 Atomic Write Unit (Normal): 1 00:07:39.110 Atomic Write Unit (PFail): 1 00:07:39.110 Atomic Compare & Write Unit: 1 00:07:39.110 Fused Compare & Write: Not Supported 00:07:39.110 Scatter-Gather List 00:07:39.110 SGL Command Set: Supported 00:07:39.110 SGL Keyed: Not Supported 00:07:39.110 SGL Bit Bucket Descriptor: Not Supported 00:07:39.110 SGL Metadata Pointer: Not Supported 00:07:39.110 Oversized SGL: Not Supported 00:07:39.110 SGL Metadata Address: Not Supported 00:07:39.110 SGL Offset: Not Supported 00:07:39.110 Transport SGL Data Block: Not Supported 00:07:39.110 Replay Protected Memory Block: Not Supported 00:07:39.110 00:07:39.110 Firmware Slot Information 00:07:39.110 ========================= 00:07:39.110 Active slot: 1 00:07:39.110 Slot 1 Firmware Revision: 1.0 00:07:39.110 00:07:39.110 00:07:39.110 Commands Supported and Effects 00:07:39.110 ============================== 00:07:39.110 Admin Commands 00:07:39.110 -------------- 00:07:39.110 Delete I/O Submission Queue (00h): Supported 00:07:39.110 Create I/O Submission Queue (01h): Supported 00:07:39.110 Get Log Page (02h): Supported 00:07:39.110 Delete I/O Completion Queue (04h): Supported 00:07:39.110 Create I/O Completion Queue (05h): Supported 00:07:39.110 Identify (06h): Supported 00:07:39.110 Abort (08h): Supported 00:07:39.110 Set Features (09h): Supported 00:07:39.110 Get Features (0Ah): Supported 00:07:39.110 Asynchronous Event Request (0Ch): Supported 00:07:39.110 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:39.110 Directive Send (19h): Supported 00:07:39.110 Directive Receive (1Ah): Supported 00:07:39.110 Virtualization Management (1Ch): Supported 00:07:39.110 Doorbell Buffer Config (7Ch): Supported 00:07:39.110 Format NVM (80h): Supported LBA-Change 00:07:39.110 I/O Commands 00:07:39.110 ------------ 00:07:39.110 Flush (00h): Supported LBA-Change 00:07:39.110 Write (01h): Supported LBA-Change 00:07:39.110 Read (02h): Supported 00:07:39.110 Compare (05h): Supported 00:07:39.110 Write Zeroes (08h): Supported LBA-Change 00:07:39.110 Dataset Management (09h): Supported LBA-Change 00:07:39.110 Unknown (0Ch): Supported 00:07:39.110 Unknown (12h): Supported 00:07:39.110 Copy (19h): Supported LBA-Change 00:07:39.110 Unknown (1Dh): Supported LBA-Change 00:07:39.110 00:07:39.110 Error Log 00:07:39.110 ========= 00:07:39.110 00:07:39.110 Arbitration 00:07:39.110 =========== 
00:07:39.110 Arbitration Burst: no limit 00:07:39.110 00:07:39.110 Power Management 00:07:39.110 ================ 00:07:39.110 Number of Power States: 1 00:07:39.110 Current Power State: Power State #0 00:07:39.110 Power State #0: 00:07:39.110 Max Power: 25.00 W 00:07:39.110 Non-Operational State: Operational 00:07:39.110 Entry Latency: 16 microseconds 00:07:39.110 Exit Latency: 4 microseconds 00:07:39.111 Relative Read Throughput: 0 00:07:39.111 Relative Read Latency: 0 00:07:39.111 Relative Write Throughput: 0 00:07:39.111 Relative Write Latency: 0 00:07:39.111 Idle Power: Not Reported 00:07:39.111 Active Power: Not Reported 00:07:39.111 Non-Operational Permissive Mode: Not Supported 00:07:39.111 00:07:39.111 Health Information 00:07:39.111 ================== 00:07:39.111 Critical Warnings: 00:07:39.111 Available Spare Space: OK 00:07:39.111 Temperature: OK 00:07:39.111 Device Reliability: OK 00:07:39.111 Read Only: No 00:07:39.111 Volatile Memory Backup: OK 00:07:39.111 Current Temperature: 323 Kelvin (50 Celsius) 00:07:39.111 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:39.111 Available Spare: 0% 00:07:39.111 Available Spare Threshold: 0% 00:07:39.111 Life Percentage Used: 0% 00:07:39.111 Data Units Read: 908 00:07:39.111 Data Units Written: 837 00:07:39.111 Host Read Commands: 39093 00:07:39.111 Host Write Commands: 38516 00:07:39.111 Controller Busy Time: 0 minutes 00:07:39.111 Power Cycles: 0 00:07:39.111 Power On Hours: 0 hours 00:07:39.111 Unsafe Shutdowns: 0 00:07:39.111 Unrecoverable Media Errors: 0 00:07:39.111 Lifetime Error Log Entries: 0 00:07:39.111 Warning Temperature Time: 0 minutes 00:07:39.111 Critical Temperature Time: 0 minutes 00:07:39.111 00:07:39.111 Number of Queues 00:07:39.111 ================ 00:07:39.111 Number of I/O Submission Queues: 64 00:07:39.111 Number of I/O Completion Queues: 64 00:07:39.111 00:07:39.111 ZNS Specific Controller Data 00:07:39.111 ============================ 00:07:39.111 Zone Append Size Limit: 0 00:07:39.111 00:07:39.111 00:07:39.111 Active Namespaces 00:07:39.111 ================= 00:07:39.111 Namespace ID:1 00:07:39.111 Error Recovery Timeout: Unlimited 00:07:39.111 Command Set Identifier: NVM (00h) 00:07:39.111 Deallocate: Supported 00:07:39.111 Deallocated/Unwritten Error: Supported 00:07:39.111 Deallocated Read Value: All 0x00 00:07:39.111 Deallocate in Write Zeroes: Not Supported 00:07:39.111 Deallocated Guard Field: 0xFFFF 00:07:39.111 Flush: Supported 00:07:39.111 Reservation: Not Supported 00:07:39.111 Namespace Sharing Capabilities: Multiple Controllers 00:07:39.111 Size (in LBAs): 262144 (1GiB) 00:07:39.111 Capacity (in LBAs): 262144 (1GiB) 00:07:39.111 Utilization (in LBAs): 262144 (1GiB) 00:07:39.111 Thin Provisioning: Not Supported 00:07:39.111 Per-NS Atomic Units: No 00:07:39.111 Maximum Single Source Range Length: 128 00:07:39.111 Maximum Copy Length: 128 00:07:39.111 Maximum Source Range Count: 128 00:07:39.111 NGUID/EUI64 Never Reused: No 00:07:39.111 Namespace Write Protected: No 00:07:39.111 Endurance group ID: 1 00:07:39.111 Number of LBA Formats: 8 00:07:39.111 Current LBA Format: LBA Format #04 00:07:39.111 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:39.111 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:39.111 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:39.111 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:39.111 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:39.111 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:39.111 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:39.111 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:39.111 00:07:39.111 Get Feature FDP: 00:07:39.111 ================ 00:07:39.111 Enabled: Yes 00:07:39.111 FDP configuration index: 0 00:07:39.111 00:07:39.111 FDP configurations log page 00:07:39.111 =========================== 00:07:39.111 Number of FDP configurations: 1 00:07:39.111 Version: 0 00:07:39.111 Size: 112 00:07:39.111 FDP Configuration Descriptor: 0 00:07:39.111 Descriptor Size: 96 00:07:39.111 Reclaim Group Identifier format: 2 00:07:39.111 FDP Volatile Write Cache: Not Present 00:07:39.111 FDP Configuration: Valid 00:07:39.111 Vendor Specific Size: 0 00:07:39.111 Number of Reclaim Groups: 2 00:07:39.111 Number of Reclaim Unit Handles: 8 00:07:39.111 Max Placement Identifiers: 128 00:07:39.111 Number of Namespaces Supported: 256 00:07:39.111 Reclaim Unit Nominal Size: 6000000 bytes 00:07:39.111 Estimated Reclaim Unit Time Limit: Not Reported 00:07:39.111 RUH Desc #000: RUH Type: Initially Isolated 00:07:39.111 RUH Desc #001: RUH Type: Initially Isolated 00:07:39.111 RUH Desc #002: RUH Type: Initially Isolated 00:07:39.111 RUH Desc #003: RUH Type: Initially Isolated 00:07:39.111 RUH Desc #004: RUH Type: Initially Isolated 00:07:39.111 RUH Desc #005: RUH Type: Initially Isolated 00:07:39.111 RUH Desc #006: RUH Type: Initially Isolated 00:07:39.111 RUH Desc #007: RUH Type: Initially Isolated 00:07:39.111 00:07:39.111 FDP reclaim unit handle usage log page 00:07:39.369 ====================================== 00:07:39.369 Number of Reclaim Unit Handles: 8 00:07:39.369 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:39.369 RUH Usage Desc #001: RUH Attributes: Unused 00:07:39.369 RUH Usage Desc #002: RUH Attributes: Unused 00:07:39.369 RUH Usage Desc #003: RUH Attributes: Unused 00:07:39.369 RUH Usage Desc #004: RUH Attributes: Unused 00:07:39.369 RUH Usage Desc #005: RUH Attributes: Unused 00:07:39.369 RUH Usage Desc #006: RUH Attributes: Unused 00:07:39.369 RUH Usage Desc #007: RUH Attributes: Unused 00:07:39.369 00:07:39.369 FDP statistics log page 00:07:39.369 ======================= 00:07:39.369 Host bytes with metadata written: 524656640 00:07:39.369 Media bytes with metadata written: 524705792 00:07:39.369 Media bytes erased: 0 00:07:39.369 00:07:39.369 FDP events log page 00:07:39.369 =================== 00:07:39.369 Number of FDP events: 0 00:07:39.369 00:07:39.369 NVM Specific Namespace Data 00:07:39.369 =========================== 00:07:39.369 Logical Block Storage Tag Mask: 0 00:07:39.369 Protection Information Capabilities: 00:07:39.369 16b Guard Protection Information Storage Tag Support: No 00:07:39.369 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:39.369 Storage Tag Check Read Support: No 00:07:39.369 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.369 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.369 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.369 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.369 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.369 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.369 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.369 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.369 ************************************ 00:07:39.369 END TEST nvme_identify 00:07:39.369 ************************************ 00:07:39.369 00:07:39.369 real 0m1.166s 00:07:39.369 user 0m0.445s 00:07:39.369 sys 0m0.511s 00:07:39.369 17:48:57 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.369 17:48:57 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 
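
The controller dump above reports temperatures in Kelvin, as the NVMe spec defines them; the 323 K / 343 K values correspond to the 50 C reading and 70 C threshold shown (Celsius = Kelvin - 273). A minimal sketch of reproducing the dump by hand follows; the spdk_nvme_identify path mirrors the spdk_nvme_perf path invoked below and is an assumption about this job's build layout, not a command taken from the log:

  # Re-run the identify dump against the attached controllers (binary path
  # assumed from this job's build tree) and pull out the temperature lines.
  sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      | grep -E 'Current Temperature|Temperature Threshold'
  # NVMe temperatures are in Kelvin; convert to Celsius:
  awk 'BEGIN { print 323 - 273 "C"; print 343 - 273 "C" }'   # -> 50C, 70C
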
00:07:39.369 17:48:57 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:39.369 17:48:57 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:39.369 17:48:57 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.369 17:48:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:39.369 ************************************ 00:07:39.369 START TEST nvme_perf 00:07:39.369 ************************************ 00:07:39.369 17:48:57 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:07:39.369 17:48:57 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:40.745 Initializing NVMe Controllers 00:07:40.745 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:40.745 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:40.745 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:40.745 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:40.745 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:40.745 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:40.745 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:40.745 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:40.745 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:40.745 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:40.745 Initialization complete. Launching workers. 
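
The tables and histograms that follow are the output of the spdk_nvme_perf invocation above: queue depth 128 (-q), 100% reads (-w read), 12288-byte (12 KiB) I/Os (-o), a 1-second run (-t), and -LL to emit the per-bucket latency histograms in addition to the percentile summaries. A comparable standalone run against a single controller might look like this sketch (the setup.sh step and the -r transport filter are standard SPDK usage, assumed here rather than taken from this log):

  # Bind the NVMe devices to a userspace driver, then benchmark one
  # controller with the same workload parameters as the test above.
  sudo ./scripts/setup.sh
  sudo ./build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL \
      -r 'trtype:PCIe traddr:0000:00:10.0'

In the summary table below, the Total row is simply the six per-namespace rows summed: 6 x 18835.72 IOPS = 113014.32, matching the printed 113014.30 up to rounding.
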
00:07:40.745 ======================================================== 00:07:40.745 Latency(us) 00:07:40.745 Device Information : IOPS MiB/s Average min max 00:07:40.745 PCIE (0000:00:10.0) NSID 1 from core 0: 18835.72 220.73 6804.10 5607.03 27480.22 00:07:40.745 PCIE (0000:00:11.0) NSID 1 from core 0: 18835.72 220.73 6794.98 5709.05 25877.38 00:07:40.745 PCIE (0000:00:13.0) NSID 1 from core 0: 18835.72 220.73 6784.84 5668.76 24616.66 00:07:40.745 PCIE (0000:00:12.0) NSID 1 from core 0: 18835.72 220.73 6774.54 5666.97 22960.97 00:07:40.745 PCIE (0000:00:12.0) NSID 2 from core 0: 18835.72 220.73 6763.85 5668.96 21312.99 00:07:40.745 PCIE (0000:00:12.0) NSID 3 from core 0: 18835.72 220.73 6753.46 5698.13 19582.34 00:07:40.745 ======================================================== 00:07:40.745 Total : 113014.30 1324.39 6779.30 5607.03 27480.22 00:07:40.745 00:07:40.745 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:40.745 ================================================================================= 00:07:40.745 1.00000% : 5721.797us 00:07:40.745 10.00000% : 5873.034us 00:07:40.745 25.00000% : 6074.683us 00:07:40.745 50.00000% : 6402.363us 00:07:40.745 75.00000% : 6704.837us 00:07:40.745 90.00000% : 8116.382us 00:07:40.745 95.00000% : 9326.277us 00:07:40.745 98.00000% : 11040.295us 00:07:40.745 99.00000% : 14417.920us 00:07:40.745 99.50000% : 21273.994us 00:07:40.745 99.90000% : 27020.997us 00:07:40.745 99.99000% : 27625.945us 00:07:40.745 99.99900% : 27625.945us 00:07:40.745 99.99990% : 27625.945us 00:07:40.745 99.99999% : 27625.945us 00:07:40.745 00:07:40.745 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:40.745 ================================================================================= 00:07:40.745 1.00000% : 5797.415us 00:07:40.745 10.00000% : 5923.446us 00:07:40.745 25.00000% : 6099.889us 00:07:40.745 50.00000% : 6377.157us 00:07:40.745 75.00000% : 6654.425us 00:07:40.745 90.00000% : 8166.794us 00:07:40.745 95.00000% : 9175.040us 00:07:40.745 98.00000% : 11443.594us 00:07:40.745 99.00000% : 14216.271us 00:07:40.745 99.50000% : 20064.098us 00:07:40.745 99.90000% : 25508.628us 00:07:40.745 99.99000% : 26012.751us 00:07:40.745 99.99900% : 26012.751us 00:07:40.745 99.99990% : 26012.751us 00:07:40.745 99.99999% : 26012.751us 00:07:40.745 00:07:40.745 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:40.745 ================================================================================= 00:07:40.745 1.00000% : 5797.415us 00:07:40.745 10.00000% : 5923.446us 00:07:40.745 25.00000% : 6099.889us 00:07:40.745 50.00000% : 6377.157us 00:07:40.745 75.00000% : 6654.425us 00:07:40.745 90.00000% : 8116.382us 00:07:40.745 95.00000% : 9225.452us 00:07:40.745 98.00000% : 11443.594us 00:07:40.745 99.00000% : 14115.446us 00:07:40.745 99.50000% : 18753.378us 00:07:40.745 99.90000% : 24197.908us 00:07:40.745 99.99000% : 24601.206us 00:07:40.745 99.99900% : 24702.031us 00:07:40.745 99.99990% : 24702.031us 00:07:40.745 99.99999% : 24702.031us 00:07:40.745 00:07:40.745 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:40.745 ================================================================================= 00:07:40.745 1.00000% : 5797.415us 00:07:40.745 10.00000% : 5923.446us 00:07:40.745 25.00000% : 6099.889us 00:07:40.745 50.00000% : 6377.157us 00:07:40.745 75.00000% : 6654.425us 00:07:40.745 90.00000% : 8166.794us 00:07:40.745 95.00000% : 9376.689us 00:07:40.745 98.00000% : 11544.418us 00:07:40.745 99.00000% : 
14317.095us 00:07:40.745 99.50000% : 17140.185us 00:07:40.745 99.90000% : 22584.714us 00:07:40.745 99.99000% : 22988.012us 00:07:40.745 99.99900% : 22988.012us 00:07:40.745 99.99990% : 22988.012us 00:07:40.745 99.99999% : 22988.012us 00:07:40.745 00:07:40.745 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:40.745 ================================================================================= 00:07:40.745 1.00000% : 5797.415us 00:07:40.745 10.00000% : 5923.446us 00:07:40.746 25.00000% : 6099.889us 00:07:40.746 50.00000% : 6377.157us 00:07:40.746 75.00000% : 6654.425us 00:07:40.746 90.00000% : 8116.382us 00:07:40.746 95.00000% : 9527.926us 00:07:40.746 98.00000% : 11494.006us 00:07:40.746 99.00000% : 14619.569us 00:07:40.746 99.50000% : 15728.640us 00:07:40.746 99.90000% : 20870.695us 00:07:40.746 99.99000% : 21374.818us 00:07:40.746 99.99900% : 21374.818us 00:07:40.746 99.99990% : 21374.818us 00:07:40.746 99.99999% : 21374.818us 00:07:40.746 00:07:40.746 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:40.746 ================================================================================= 00:07:40.746 1.00000% : 5797.415us 00:07:40.746 10.00000% : 5923.446us 00:07:40.746 25.00000% : 6099.889us 00:07:40.746 50.00000% : 6377.157us 00:07:40.746 75.00000% : 6654.425us 00:07:40.746 90.00000% : 8116.382us 00:07:40.746 95.00000% : 9477.514us 00:07:40.746 98.00000% : 11141.120us 00:07:40.746 99.00000% : 13812.972us 00:07:40.746 99.50000% : 15022.868us 00:07:40.746 99.90000% : 19156.677us 00:07:40.746 99.99000% : 19559.975us 00:07:40.746 99.99900% : 19660.800us 00:07:40.746 99.99990% : 19660.800us 00:07:40.746 99.99999% : 19660.800us 00:07:40.746 00:07:40.746 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:40.746 ============================================================================== 00:07:40.746 Range in us Cumulative IO count 00:07:40.746 5595.766 - 5620.972: 0.0106% ( 2) 00:07:40.746 5620.972 - 5646.178: 0.0794% ( 13) 00:07:40.746 5646.178 - 5671.385: 0.2966% ( 41) 00:07:40.746 5671.385 - 5696.591: 0.7468% ( 85) 00:07:40.746 5696.591 - 5721.797: 1.7426% ( 188) 00:07:40.746 5721.797 - 5747.003: 2.9979% ( 237) 00:07:40.746 5747.003 - 5772.209: 4.5233% ( 288) 00:07:40.746 5772.209 - 5797.415: 6.2500% ( 326) 00:07:40.746 5797.415 - 5822.622: 7.9714% ( 325) 00:07:40.746 5822.622 - 5847.828: 9.6133% ( 310) 00:07:40.746 5847.828 - 5873.034: 11.1070% ( 282) 00:07:40.746 5873.034 - 5898.240: 12.7860% ( 317) 00:07:40.746 5898.240 - 5923.446: 14.5074% ( 325) 00:07:40.746 5923.446 - 5948.652: 16.3083% ( 340) 00:07:40.746 5948.652 - 5973.858: 18.1409% ( 346) 00:07:40.746 5973.858 - 5999.065: 19.9153% ( 335) 00:07:40.746 5999.065 - 6024.271: 21.7214% ( 341) 00:07:40.746 6024.271 - 6049.477: 23.7924% ( 391) 00:07:40.746 6049.477 - 6074.683: 25.5879% ( 339) 00:07:40.746 6074.683 - 6099.889: 27.6430% ( 388) 00:07:40.746 6099.889 - 6125.095: 29.5286% ( 356) 00:07:40.746 6125.095 - 6150.302: 31.5466% ( 381) 00:07:40.746 6150.302 - 6175.508: 33.4163% ( 353) 00:07:40.746 6175.508 - 6200.714: 35.4396% ( 382) 00:07:40.746 6200.714 - 6225.920: 37.4788% ( 385) 00:07:40.746 6225.920 - 6251.126: 39.5180% ( 385) 00:07:40.746 6251.126 - 6276.332: 41.4301% ( 361) 00:07:40.746 6276.332 - 6301.538: 43.3581% ( 364) 00:07:40.746 6301.538 - 6326.745: 45.4025% ( 386) 00:07:40.746 6326.745 - 6351.951: 47.4047% ( 378) 00:07:40.746 6351.951 - 6377.157: 49.4015% ( 377) 00:07:40.746 6377.157 - 6402.363: 51.3877% ( 375) 00:07:40.746 6402.363 - 6427.569: 53.3951% ( 
379) 00:07:40.746 6427.569 - 6452.775: 55.4873% ( 395) 00:07:40.746 6452.775 - 6503.188: 59.5074% ( 759) 00:07:40.746 6503.188 - 6553.600: 63.5540% ( 764) 00:07:40.746 6553.600 - 6604.012: 67.6165% ( 767) 00:07:40.746 6604.012 - 6654.425: 71.6790% ( 767) 00:07:40.746 6654.425 - 6704.837: 75.2013% ( 665) 00:07:40.746 6704.837 - 6755.249: 77.5583% ( 445) 00:07:40.746 6755.249 - 6805.662: 79.2744% ( 324) 00:07:40.746 6805.662 - 6856.074: 80.5456% ( 240) 00:07:40.746 6856.074 - 6906.486: 81.4672% ( 174) 00:07:40.746 6906.486 - 6956.898: 82.2299% ( 144) 00:07:40.746 6956.898 - 7007.311: 82.9290% ( 132) 00:07:40.746 7007.311 - 7057.723: 83.5169% ( 111) 00:07:40.746 7057.723 - 7108.135: 84.0148% ( 94) 00:07:40.746 7108.135 - 7158.548: 84.4492% ( 82) 00:07:40.746 7158.548 - 7208.960: 84.8146% ( 69) 00:07:40.746 7208.960 - 7259.372: 85.0953% ( 53) 00:07:40.746 7259.372 - 7309.785: 85.4343% ( 64) 00:07:40.746 7309.785 - 7360.197: 85.7203% ( 54) 00:07:40.746 7360.197 - 7410.609: 86.0064% ( 54) 00:07:40.746 7410.609 - 7461.022: 86.2977% ( 55) 00:07:40.746 7461.022 - 7511.434: 86.6102% ( 59) 00:07:40.746 7511.434 - 7561.846: 86.9068% ( 56) 00:07:40.746 7561.846 - 7612.258: 87.1928% ( 54) 00:07:40.746 7612.258 - 7662.671: 87.4947% ( 57) 00:07:40.746 7662.671 - 7713.083: 87.7913% ( 56) 00:07:40.746 7713.083 - 7763.495: 88.1674% ( 71) 00:07:40.746 7763.495 - 7813.908: 88.4640% ( 56) 00:07:40.746 7813.908 - 7864.320: 88.7288% ( 50) 00:07:40.746 7864.320 - 7914.732: 88.9989% ( 51) 00:07:40.746 7914.732 - 7965.145: 89.2638% ( 50) 00:07:40.746 7965.145 - 8015.557: 89.5339% ( 51) 00:07:40.746 8015.557 - 8065.969: 89.7405% ( 39) 00:07:40.746 8065.969 - 8116.382: 90.0000% ( 49) 00:07:40.746 8116.382 - 8166.794: 90.2701% ( 51) 00:07:40.746 8166.794 - 8217.206: 90.5138% ( 46) 00:07:40.746 8217.206 - 8267.618: 90.7627% ( 47) 00:07:40.746 8267.618 - 8318.031: 91.0699% ( 58) 00:07:40.746 8318.031 - 8368.443: 91.2871% ( 41) 00:07:40.746 8368.443 - 8418.855: 91.5201% ( 44) 00:07:40.746 8418.855 - 8469.268: 91.7956% ( 52) 00:07:40.746 8469.268 - 8519.680: 92.0392% ( 46) 00:07:40.746 8519.680 - 8570.092: 92.3146% ( 52) 00:07:40.746 8570.092 - 8620.505: 92.5636% ( 47) 00:07:40.746 8620.505 - 8670.917: 92.8390% ( 52) 00:07:40.746 8670.917 - 8721.329: 93.0720% ( 44) 00:07:40.746 8721.329 - 8771.742: 93.2521% ( 34) 00:07:40.746 8771.742 - 8822.154: 93.4587% ( 39) 00:07:40.746 8822.154 - 8872.566: 93.6600% ( 38) 00:07:40.746 8872.566 - 8922.978: 93.8453% ( 35) 00:07:40.746 8922.978 - 8973.391: 94.0360% ( 36) 00:07:40.746 8973.391 - 9023.803: 94.2161% ( 34) 00:07:40.746 9023.803 - 9074.215: 94.4174% ( 38) 00:07:40.746 9074.215 - 9124.628: 94.5763% ( 30) 00:07:40.746 9124.628 - 9175.040: 94.6981% ( 23) 00:07:40.746 9175.040 - 9225.452: 94.8517% ( 29) 00:07:40.746 9225.452 - 9275.865: 94.9576% ( 20) 00:07:40.746 9275.865 - 9326.277: 95.0794% ( 23) 00:07:40.746 9326.277 - 9376.689: 95.1642% ( 16) 00:07:40.746 9376.689 - 9427.102: 95.2913% ( 24) 00:07:40.746 9427.102 - 9477.514: 95.4025% ( 21) 00:07:40.746 9477.514 - 9527.926: 95.4767% ( 14) 00:07:40.746 9527.926 - 9578.338: 95.5508% ( 14) 00:07:40.746 9578.338 - 9628.751: 95.6197% ( 13) 00:07:40.746 9628.751 - 9679.163: 95.6992% ( 15) 00:07:40.746 9679.163 - 9729.575: 95.7998% ( 19) 00:07:40.746 9729.575 - 9779.988: 95.9110% ( 21) 00:07:40.746 9779.988 - 9830.400: 96.0064% ( 18) 00:07:40.746 9830.400 - 9880.812: 96.0964% ( 17) 00:07:40.746 9880.812 - 9931.225: 96.2288% ( 25) 00:07:40.746 9931.225 - 9981.637: 96.3665% ( 26) 00:07:40.746 9981.637 - 10032.049: 96.4725% ( 20) 
00:07:40.746 10032.049 - 10082.462: 96.5837% ( 21) 00:07:40.746 10082.462 - 10132.874: 96.7055% ( 23) 00:07:40.746 10132.874 - 10183.286: 96.8008% ( 18) 00:07:40.746 10183.286 - 10233.698: 96.8856% ( 16) 00:07:40.746 10233.698 - 10284.111: 96.9862% ( 19) 00:07:40.746 10284.111 - 10334.523: 97.0710% ( 16) 00:07:40.746 10334.523 - 10384.935: 97.1557% ( 16) 00:07:40.746 10384.935 - 10435.348: 97.2246% ( 13) 00:07:40.746 10435.348 - 10485.760: 97.3305% ( 20) 00:07:40.746 10485.760 - 10536.172: 97.4206% ( 17) 00:07:40.746 10536.172 - 10586.585: 97.5159% ( 18) 00:07:40.746 10586.585 - 10636.997: 97.5847% ( 13) 00:07:40.746 10636.997 - 10687.409: 97.6642% ( 15) 00:07:40.746 10687.409 - 10737.822: 97.7278% ( 12) 00:07:40.746 10737.822 - 10788.234: 97.8072% ( 15) 00:07:40.746 10788.234 - 10838.646: 97.8602% ( 10) 00:07:40.746 10838.646 - 10889.058: 97.9078% ( 9) 00:07:40.746 10889.058 - 10939.471: 97.9449% ( 7) 00:07:40.746 10939.471 - 10989.883: 97.9608% ( 3) 00:07:40.746 10989.883 - 11040.295: 98.0032% ( 8) 00:07:40.746 11040.295 - 11090.708: 98.0138% ( 2) 00:07:40.746 11090.708 - 11141.120: 98.0350% ( 4) 00:07:40.746 11141.120 - 11191.532: 98.0561% ( 4) 00:07:40.746 11191.532 - 11241.945: 98.0667% ( 2) 00:07:40.746 11241.945 - 11292.357: 98.0932% ( 5) 00:07:40.746 11292.357 - 11342.769: 98.1038% ( 2) 00:07:40.746 11342.769 - 11393.182: 98.1197% ( 3) 00:07:40.746 11393.182 - 11443.594: 98.1515% ( 6) 00:07:40.746 11443.594 - 11494.006: 98.1621% ( 2) 00:07:40.746 11494.006 - 11544.418: 98.1727% ( 2) 00:07:40.746 11544.418 - 11594.831: 98.1886% ( 3) 00:07:40.746 11594.831 - 11645.243: 98.2150% ( 5) 00:07:40.746 11645.243 - 11695.655: 98.2362% ( 4) 00:07:40.746 11695.655 - 11746.068: 98.2627% ( 5) 00:07:40.746 11746.068 - 11796.480: 98.2892% ( 5) 00:07:40.746 11796.480 - 11846.892: 98.2998% ( 2) 00:07:40.746 11846.892 - 11897.305: 98.3210% ( 4) 00:07:40.746 11897.305 - 11947.717: 98.3528% ( 6) 00:07:40.746 11947.717 - 11998.129: 98.3792% ( 5) 00:07:40.746 11998.129 - 12048.542: 98.4004% ( 4) 00:07:40.746 12048.542 - 12098.954: 98.4269% ( 5) 00:07:40.746 12098.954 - 12149.366: 98.4534% ( 5) 00:07:40.746 12149.366 - 12199.778: 98.4693% ( 3) 00:07:40.746 12199.778 - 12250.191: 98.4799% ( 2) 00:07:40.746 12250.191 - 12300.603: 98.4958% ( 3) 00:07:40.746 12300.603 - 12351.015: 98.5117% ( 3) 00:07:40.746 12351.015 - 12401.428: 98.5275% ( 3) 00:07:40.746 12401.428 - 12451.840: 98.5434% ( 3) 00:07:40.746 12451.840 - 12502.252: 98.5593% ( 3) 00:07:40.746 12502.252 - 12552.665: 98.5699% ( 2) 00:07:40.746 12552.665 - 12603.077: 98.5858% ( 3) 00:07:40.746 12603.077 - 12653.489: 98.6017% ( 3) 00:07:40.746 12653.489 - 12703.902: 98.6176% ( 3) 00:07:40.746 12703.902 - 12754.314: 98.6282% ( 2) 00:07:40.746 12754.314 - 12804.726: 98.6441% ( 3) 00:07:40.746 12855.138 - 12905.551: 98.6494% ( 1) 00:07:40.746 12905.551 - 13006.375: 98.6758% ( 5) 00:07:40.746 13107.200 - 13208.025: 98.6970% ( 4) 00:07:40.746 13208.025 - 13308.849: 98.7023% ( 1) 00:07:40.746 13308.849 - 13409.674: 98.7182% ( 3) 00:07:40.746 13409.674 - 13510.498: 98.7288% ( 2) 00:07:40.746 13510.498 - 13611.323: 98.7394% ( 2) 00:07:40.746 13611.323 - 13712.148: 98.7818% ( 8) 00:07:40.747 13712.148 - 13812.972: 98.8189% ( 7) 00:07:40.747 13812.972 - 13913.797: 98.8400% ( 4) 00:07:40.747 13913.797 - 14014.622: 98.8612% ( 4) 00:07:40.747 14014.622 - 14115.446: 98.8877% ( 5) 00:07:40.747 14115.446 - 14216.271: 98.9248% ( 7) 00:07:40.747 14216.271 - 14317.095: 98.9672% ( 8) 00:07:40.747 14317.095 - 14417.920: 99.0042% ( 7) 00:07:40.747 14417.920 - 14518.745: 
99.0360% ( 6) 00:07:40.747 14518.745 - 14619.569: 99.0678% ( 6) 00:07:40.747 14619.569 - 14720.394: 99.1049% ( 7) 00:07:40.747 14720.394 - 14821.218: 99.1367% ( 6) 00:07:40.747 14821.218 - 14922.043: 99.1684% ( 6) 00:07:40.747 14922.043 - 15022.868: 99.2002% ( 6) 00:07:40.747 15022.868 - 15123.692: 99.2373% ( 7) 00:07:40.747 15123.692 - 15224.517: 99.2691% ( 6) 00:07:40.747 15224.517 - 15325.342: 99.2956% ( 5) 00:07:40.747 15325.342 - 15426.166: 99.3061% ( 2) 00:07:40.747 15426.166 - 15526.991: 99.3220% ( 3) 00:07:40.747 20265.748 - 20366.572: 99.3326% ( 2) 00:07:40.747 20366.572 - 20467.397: 99.3538% ( 4) 00:07:40.747 20467.397 - 20568.222: 99.3750% ( 4) 00:07:40.747 20568.222 - 20669.046: 99.3909% ( 3) 00:07:40.747 20669.046 - 20769.871: 99.4121% ( 4) 00:07:40.747 20769.871 - 20870.695: 99.4333% ( 4) 00:07:40.747 20870.695 - 20971.520: 99.4492% ( 3) 00:07:40.747 20971.520 - 21072.345: 99.4650% ( 3) 00:07:40.747 21072.345 - 21173.169: 99.4915% ( 5) 00:07:40.747 21173.169 - 21273.994: 99.5074% ( 3) 00:07:40.747 21273.994 - 21374.818: 99.5286% ( 4) 00:07:40.747 21374.818 - 21475.643: 99.5445% ( 3) 00:07:40.747 21475.643 - 21576.468: 99.5657% ( 4) 00:07:40.747 21576.468 - 21677.292: 99.5816% ( 3) 00:07:40.747 21677.292 - 21778.117: 99.5975% ( 3) 00:07:40.747 21778.117 - 21878.942: 99.6186% ( 4) 00:07:40.747 21878.942 - 21979.766: 99.6398% ( 4) 00:07:40.747 21979.766 - 22080.591: 99.6557% ( 3) 00:07:40.747 22080.591 - 22181.415: 99.6610% ( 1) 00:07:40.747 25710.277 - 25811.102: 99.6716% ( 2) 00:07:40.747 25811.102 - 26012.751: 99.7140% ( 8) 00:07:40.747 26012.751 - 26214.400: 99.7511% ( 7) 00:07:40.747 26214.400 - 26416.049: 99.7934% ( 8) 00:07:40.747 26416.049 - 26617.698: 99.8305% ( 7) 00:07:40.747 26617.698 - 26819.348: 99.8676% ( 7) 00:07:40.747 26819.348 - 27020.997: 99.9047% ( 7) 00:07:40.747 27020.997 - 27222.646: 99.9470% ( 8) 00:07:40.747 27222.646 - 27424.295: 99.9841% ( 7) 00:07:40.747 27424.295 - 27625.945: 100.0000% ( 3) 00:07:40.747 00:07:40.747 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:40.747 ============================================================================== 00:07:40.747 Range in us Cumulative IO count 00:07:40.747 5696.591 - 5721.797: 0.0636% ( 12) 00:07:40.747 5721.797 - 5747.003: 0.2966% ( 44) 00:07:40.747 5747.003 - 5772.209: 0.8633% ( 107) 00:07:40.747 5772.209 - 5797.415: 1.7373% ( 165) 00:07:40.747 5797.415 - 5822.622: 3.1568% ( 268) 00:07:40.747 5822.622 - 5847.828: 4.7669% ( 304) 00:07:40.747 5847.828 - 5873.034: 6.7161% ( 368) 00:07:40.747 5873.034 - 5898.240: 8.6547% ( 366) 00:07:40.747 5898.240 - 5923.446: 10.8739% ( 419) 00:07:40.747 5923.446 - 5948.652: 12.8602% ( 375) 00:07:40.747 5948.652 - 5973.858: 14.6345% ( 335) 00:07:40.747 5973.858 - 5999.065: 16.6843% ( 387) 00:07:40.747 5999.065 - 6024.271: 18.7871% ( 397) 00:07:40.747 6024.271 - 6049.477: 20.8051% ( 381) 00:07:40.747 6049.477 - 6074.683: 22.9025% ( 396) 00:07:40.747 6074.683 - 6099.889: 25.0583% ( 407) 00:07:40.747 6099.889 - 6125.095: 27.2828% ( 420) 00:07:40.747 6125.095 - 6150.302: 29.6028% ( 438) 00:07:40.747 6150.302 - 6175.508: 32.0286% ( 458) 00:07:40.747 6175.508 - 6200.714: 34.3644% ( 441) 00:07:40.747 6200.714 - 6225.920: 36.7373% ( 448) 00:07:40.747 6225.920 - 6251.126: 39.0572% ( 438) 00:07:40.747 6251.126 - 6276.332: 41.4142% ( 445) 00:07:40.747 6276.332 - 6301.538: 43.8294% ( 456) 00:07:40.747 6301.538 - 6326.745: 46.1388% ( 436) 00:07:40.747 6326.745 - 6351.951: 48.5434% ( 454) 00:07:40.747 6351.951 - 6377.157: 50.8686% ( 439) 00:07:40.747 6377.157 - 
6402.363: 53.2733% ( 454) 00:07:40.747 6402.363 - 6427.569: 55.6515% ( 449) 00:07:40.747 6427.569 - 6452.775: 58.0667% ( 456) 00:07:40.747 6452.775 - 6503.188: 62.7489% ( 884) 00:07:40.747 6503.188 - 6553.600: 67.3941% ( 877) 00:07:40.747 6553.600 - 6604.012: 71.8697% ( 845) 00:07:40.747 6604.012 - 6654.425: 75.3284% ( 653) 00:07:40.747 6654.425 - 6704.837: 77.6006% ( 429) 00:07:40.747 6704.837 - 6755.249: 79.0996% ( 283) 00:07:40.747 6755.249 - 6805.662: 80.3019% ( 227) 00:07:40.747 6805.662 - 6856.074: 81.2871% ( 186) 00:07:40.747 6856.074 - 6906.486: 82.0498% ( 144) 00:07:40.747 6906.486 - 6956.898: 82.6642% ( 116) 00:07:40.747 6956.898 - 7007.311: 83.1833% ( 98) 00:07:40.747 7007.311 - 7057.723: 83.6706% ( 92) 00:07:40.747 7057.723 - 7108.135: 84.1208% ( 85) 00:07:40.747 7108.135 - 7158.548: 84.4703% ( 66) 00:07:40.747 7158.548 - 7208.960: 84.7828% ( 59) 00:07:40.747 7208.960 - 7259.372: 85.0900% ( 58) 00:07:40.747 7259.372 - 7309.785: 85.3867% ( 56) 00:07:40.747 7309.785 - 7360.197: 85.7203% ( 63) 00:07:40.747 7360.197 - 7410.609: 86.0540% ( 63) 00:07:40.747 7410.609 - 7461.022: 86.3824% ( 62) 00:07:40.747 7461.022 - 7511.434: 86.6896% ( 58) 00:07:40.747 7511.434 - 7561.846: 87.0127% ( 61) 00:07:40.747 7561.846 - 7612.258: 87.2881% ( 52) 00:07:40.747 7612.258 - 7662.671: 87.5424% ( 48) 00:07:40.747 7662.671 - 7713.083: 87.8337% ( 55) 00:07:40.747 7713.083 - 7763.495: 88.1038% ( 51) 00:07:40.747 7763.495 - 7813.908: 88.4163% ( 59) 00:07:40.747 7813.908 - 7864.320: 88.7023% ( 54) 00:07:40.747 7864.320 - 7914.732: 88.9354% ( 44) 00:07:40.747 7914.732 - 7965.145: 89.1631% ( 43) 00:07:40.747 7965.145 - 8015.557: 89.3856% ( 42) 00:07:40.747 8015.557 - 8065.969: 89.6292% ( 46) 00:07:40.747 8065.969 - 8116.382: 89.8888% ( 49) 00:07:40.747 8116.382 - 8166.794: 90.1059% ( 41) 00:07:40.747 8166.794 - 8217.206: 90.3602% ( 48) 00:07:40.747 8217.206 - 8267.618: 90.6250% ( 50) 00:07:40.747 8267.618 - 8318.031: 90.9269% ( 57) 00:07:40.747 8318.031 - 8368.443: 91.2977% ( 70) 00:07:40.747 8368.443 - 8418.855: 91.6049% ( 58) 00:07:40.747 8418.855 - 8469.268: 91.9174% ( 59) 00:07:40.747 8469.268 - 8519.680: 92.2087% ( 55) 00:07:40.747 8519.680 - 8570.092: 92.4735% ( 50) 00:07:40.747 8570.092 - 8620.505: 92.7066% ( 44) 00:07:40.747 8620.505 - 8670.917: 92.9979% ( 55) 00:07:40.747 8670.917 - 8721.329: 93.2362% ( 45) 00:07:40.747 8721.329 - 8771.742: 93.4428% ( 39) 00:07:40.747 8771.742 - 8822.154: 93.6653% ( 42) 00:07:40.747 8822.154 - 8872.566: 93.9195% ( 48) 00:07:40.747 8872.566 - 8922.978: 94.1737% ( 48) 00:07:40.747 8922.978 - 8973.391: 94.3909% ( 41) 00:07:40.747 8973.391 - 9023.803: 94.6028% ( 40) 00:07:40.747 9023.803 - 9074.215: 94.7828% ( 34) 00:07:40.747 9074.215 - 9124.628: 94.9841% ( 38) 00:07:40.747 9124.628 - 9175.040: 95.1589% ( 33) 00:07:40.747 9175.040 - 9225.452: 95.3072% ( 28) 00:07:40.747 9225.452 - 9275.865: 95.4131% ( 20) 00:07:40.747 9275.865 - 9326.277: 95.5667% ( 29) 00:07:40.747 9326.277 - 9376.689: 95.6727% ( 20) 00:07:40.747 9376.689 - 9427.102: 95.7521% ( 15) 00:07:40.747 9427.102 - 9477.514: 95.8263% ( 14) 00:07:40.747 9477.514 - 9527.926: 95.8845% ( 11) 00:07:40.747 9527.926 - 9578.338: 95.9322% ( 9) 00:07:40.747 9578.338 - 9628.751: 95.9746% ( 8) 00:07:40.747 9628.751 - 9679.163: 96.0222% ( 9) 00:07:40.747 9679.163 - 9729.575: 96.0593% ( 7) 00:07:40.747 9729.575 - 9779.988: 96.1017% ( 8) 00:07:40.747 9779.988 - 9830.400: 96.1388% ( 7) 00:07:40.747 9830.400 - 9880.812: 96.2235% ( 16) 00:07:40.747 9880.812 - 9931.225: 96.2924% ( 13) 00:07:40.747 9931.225 - 9981.637: 96.3665% 
( 14) 00:07:40.747 9981.637 - 10032.049: 96.4725% ( 20) 00:07:40.747 10032.049 - 10082.462: 96.5466% ( 14) 00:07:40.747 10082.462 - 10132.874: 96.6208% ( 14) 00:07:40.747 10132.874 - 10183.286: 96.7002% ( 15) 00:07:40.747 10183.286 - 10233.698: 96.7585% ( 11) 00:07:40.747 10233.698 - 10284.111: 96.8167% ( 11) 00:07:40.747 10284.111 - 10334.523: 96.9121% ( 18) 00:07:40.747 10334.523 - 10384.935: 96.9968% ( 16) 00:07:40.747 10384.935 - 10435.348: 97.0816% ( 16) 00:07:40.747 10435.348 - 10485.760: 97.1716% ( 17) 00:07:40.747 10485.760 - 10536.172: 97.2617% ( 17) 00:07:40.747 10536.172 - 10586.585: 97.3464% ( 16) 00:07:40.747 10586.585 - 10636.997: 97.4258% ( 15) 00:07:40.747 10636.997 - 10687.409: 97.5000% ( 14) 00:07:40.747 10687.409 - 10737.822: 97.5530% ( 10) 00:07:40.747 10737.822 - 10788.234: 97.6059% ( 10) 00:07:40.747 10788.234 - 10838.646: 97.6377% ( 6) 00:07:40.747 10838.646 - 10889.058: 97.6854% ( 9) 00:07:40.747 10889.058 - 10939.471: 97.7278% ( 8) 00:07:40.747 10939.471 - 10989.883: 97.7701% ( 8) 00:07:40.747 10989.883 - 11040.295: 97.8178% ( 9) 00:07:40.747 11040.295 - 11090.708: 97.8602% ( 8) 00:07:40.747 11090.708 - 11141.120: 97.8972% ( 7) 00:07:40.747 11141.120 - 11191.532: 97.9025% ( 1) 00:07:40.747 11191.532 - 11241.945: 97.9131% ( 2) 00:07:40.747 11241.945 - 11292.357: 97.9290% ( 3) 00:07:40.747 11292.357 - 11342.769: 97.9502% ( 4) 00:07:40.747 11342.769 - 11393.182: 97.9767% ( 5) 00:07:40.747 11393.182 - 11443.594: 98.0191% ( 8) 00:07:40.747 11443.594 - 11494.006: 98.0403% ( 4) 00:07:40.747 11494.006 - 11544.418: 98.0456% ( 1) 00:07:40.747 11544.418 - 11594.831: 98.0508% ( 1) 00:07:40.747 11594.831 - 11645.243: 98.0614% ( 2) 00:07:40.747 11645.243 - 11695.655: 98.0720% ( 2) 00:07:40.747 11695.655 - 11746.068: 98.0879% ( 3) 00:07:40.747 11746.068 - 11796.480: 98.1197% ( 6) 00:07:40.747 11796.480 - 11846.892: 98.1462% ( 5) 00:07:40.747 11846.892 - 11897.305: 98.1727% ( 5) 00:07:40.747 11897.305 - 11947.717: 98.2044% ( 6) 00:07:40.747 11947.717 - 11998.129: 98.2309% ( 5) 00:07:40.747 11998.129 - 12048.542: 98.2574% ( 5) 00:07:40.747 12048.542 - 12098.954: 98.2892% ( 6) 00:07:40.747 12098.954 - 12149.366: 98.3104% ( 4) 00:07:40.747 12149.366 - 12199.778: 98.3369% ( 5) 00:07:40.747 12199.778 - 12250.191: 98.3633% ( 5) 00:07:40.747 12250.191 - 12300.603: 98.3951% ( 6) 00:07:40.747 12300.603 - 12351.015: 98.4216% ( 5) 00:07:40.748 12351.015 - 12401.428: 98.4481% ( 5) 00:07:40.748 12401.428 - 12451.840: 98.4746% ( 5) 00:07:40.748 12451.840 - 12502.252: 98.5064% ( 6) 00:07:40.748 12502.252 - 12552.665: 98.5328% ( 5) 00:07:40.748 12552.665 - 12603.077: 98.5593% ( 5) 00:07:40.748 12603.077 - 12653.489: 98.5858% ( 5) 00:07:40.748 12653.489 - 12703.902: 98.6070% ( 4) 00:07:40.748 12703.902 - 12754.314: 98.6176% ( 2) 00:07:40.748 12754.314 - 12804.726: 98.6282% ( 2) 00:07:40.748 12804.726 - 12855.138: 98.6388% ( 2) 00:07:40.748 12855.138 - 12905.551: 98.6441% ( 1) 00:07:40.748 12905.551 - 13006.375: 98.6600% ( 3) 00:07:40.748 13006.375 - 13107.200: 98.6917% ( 6) 00:07:40.748 13107.200 - 13208.025: 98.7129% ( 4) 00:07:40.748 13208.025 - 13308.849: 98.7394% ( 5) 00:07:40.748 13308.849 - 13409.674: 98.7447% ( 1) 00:07:40.748 13409.674 - 13510.498: 98.7765% ( 6) 00:07:40.748 13510.498 - 13611.323: 98.8083% ( 6) 00:07:40.748 13611.323 - 13712.148: 98.8612% ( 10) 00:07:40.748 13712.148 - 13812.972: 98.8877% ( 5) 00:07:40.748 13812.972 - 13913.797: 98.9142% ( 5) 00:07:40.748 13913.797 - 14014.622: 98.9513% ( 7) 00:07:40.748 14014.622 - 14115.446: 98.9831% ( 6) 00:07:40.748 14115.446 - 
14216.271: 99.0201% ( 7) 00:07:40.748 14216.271 - 14317.095: 99.0731% ( 10) 00:07:40.748 14317.095 - 14417.920: 99.1155% ( 8) 00:07:40.748 14417.920 - 14518.745: 99.1472% ( 6) 00:07:40.748 14518.745 - 14619.569: 99.1631% ( 3) 00:07:40.748 14619.569 - 14720.394: 99.1737% ( 2) 00:07:40.748 14720.394 - 14821.218: 99.1896% ( 3) 00:07:40.748 14821.218 - 14922.043: 99.2055% ( 3) 00:07:40.748 14922.043 - 15022.868: 99.2214% ( 3) 00:07:40.748 15022.868 - 15123.692: 99.2373% ( 3) 00:07:40.748 15123.692 - 15224.517: 99.2479% ( 2) 00:07:40.748 15224.517 - 15325.342: 99.2638% ( 3) 00:07:40.748 15325.342 - 15426.166: 99.2797% ( 3) 00:07:40.748 15426.166 - 15526.991: 99.2956% ( 3) 00:07:40.748 15526.991 - 15627.815: 99.3114% ( 3) 00:07:40.748 15627.815 - 15728.640: 99.3220% ( 2) 00:07:40.748 19055.852 - 19156.677: 99.3273% ( 1) 00:07:40.748 19156.677 - 19257.502: 99.3485% ( 4) 00:07:40.748 19257.502 - 19358.326: 99.3697% ( 4) 00:07:40.748 19358.326 - 19459.151: 99.3909% ( 4) 00:07:40.748 19459.151 - 19559.975: 99.4121% ( 4) 00:07:40.748 19559.975 - 19660.800: 99.4333% ( 4) 00:07:40.748 19660.800 - 19761.625: 99.4544% ( 4) 00:07:40.748 19761.625 - 19862.449: 99.4703% ( 3) 00:07:40.748 19862.449 - 19963.274: 99.4915% ( 4) 00:07:40.748 19963.274 - 20064.098: 99.5127% ( 4) 00:07:40.748 20064.098 - 20164.923: 99.5339% ( 4) 00:07:40.748 20164.923 - 20265.748: 99.5551% ( 4) 00:07:40.748 20265.748 - 20366.572: 99.5763% ( 4) 00:07:40.748 20366.572 - 20467.397: 99.5975% ( 4) 00:07:40.748 20467.397 - 20568.222: 99.6186% ( 4) 00:07:40.748 20568.222 - 20669.046: 99.6398% ( 4) 00:07:40.748 20669.046 - 20769.871: 99.6610% ( 4) 00:07:40.748 24197.908 - 24298.732: 99.6716% ( 2) 00:07:40.748 24298.732 - 24399.557: 99.6928% ( 4) 00:07:40.748 24399.557 - 24500.382: 99.7140% ( 4) 00:07:40.748 24500.382 - 24601.206: 99.7352% ( 4) 00:07:40.748 24601.206 - 24702.031: 99.7511% ( 3) 00:07:40.748 24702.031 - 24802.855: 99.7775% ( 5) 00:07:40.748 24802.855 - 24903.680: 99.7987% ( 4) 00:07:40.748 24903.680 - 25004.505: 99.8199% ( 4) 00:07:40.748 25004.505 - 25105.329: 99.8358% ( 3) 00:07:40.748 25105.329 - 25206.154: 99.8623% ( 5) 00:07:40.748 25206.154 - 25306.978: 99.8835% ( 4) 00:07:40.748 25306.978 - 25407.803: 99.8994% ( 3) 00:07:40.748 25407.803 - 25508.628: 99.9258% ( 5) 00:07:40.748 25508.628 - 25609.452: 99.9417% ( 3) 00:07:40.748 25609.452 - 25710.277: 99.9629% ( 4) 00:07:40.748 25710.277 - 25811.102: 99.9841% ( 4) 00:07:40.748 25811.102 - 26012.751: 100.0000% ( 3) 00:07:40.748 00:07:40.748 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:40.748 ============================================================================== 00:07:40.748 Range in us Cumulative IO count 00:07:40.748 5646.178 - 5671.385: 0.0106% ( 2) 00:07:40.748 5671.385 - 5696.591: 0.0265% ( 3) 00:07:40.748 5696.591 - 5721.797: 0.0794% ( 10) 00:07:40.748 5721.797 - 5747.003: 0.2489% ( 32) 00:07:40.748 5747.003 - 5772.209: 0.7150% ( 88) 00:07:40.748 5772.209 - 5797.415: 1.5201% ( 152) 00:07:40.748 5797.415 - 5822.622: 2.8761% ( 256) 00:07:40.748 5822.622 - 5847.828: 4.5922% ( 324) 00:07:40.748 5847.828 - 5873.034: 6.7638% ( 410) 00:07:40.748 5873.034 - 5898.240: 9.0201% ( 426) 00:07:40.748 5898.240 - 5923.446: 11.2288% ( 417) 00:07:40.748 5923.446 - 5948.652: 13.3157% ( 394) 00:07:40.748 5948.652 - 5973.858: 15.3549% ( 385) 00:07:40.748 5973.858 - 5999.065: 17.2722% ( 362) 00:07:40.748 5999.065 - 6024.271: 19.2956% ( 382) 00:07:40.748 6024.271 - 6049.477: 21.3824% ( 394) 00:07:40.748 6049.477 - 6074.683: 23.5434% ( 408) 00:07:40.748 
6074.683 - 6099.889: 25.7150% ( 410) 00:07:40.748 6099.889 - 6125.095: 28.0138% ( 434) 00:07:40.748 6125.095 - 6150.302: 30.2701% ( 426) 00:07:40.748 6150.302 - 6175.508: 32.6536% ( 450) 00:07:40.748 6175.508 - 6200.714: 35.0000% ( 443) 00:07:40.748 6200.714 - 6225.920: 37.2246% ( 420) 00:07:40.748 6225.920 - 6251.126: 39.6186% ( 452) 00:07:40.748 6251.126 - 6276.332: 41.9439% ( 439) 00:07:40.748 6276.332 - 6301.538: 44.2956% ( 444) 00:07:40.748 6301.538 - 6326.745: 46.7002% ( 454) 00:07:40.748 6326.745 - 6351.951: 49.0360% ( 441) 00:07:40.748 6351.951 - 6377.157: 51.3294% ( 433) 00:07:40.748 6377.157 - 6402.363: 53.6653% ( 441) 00:07:40.748 6402.363 - 6427.569: 55.9852% ( 438) 00:07:40.748 6427.569 - 6452.775: 58.3739% ( 451) 00:07:40.748 6452.775 - 6503.188: 63.0985% ( 892) 00:07:40.748 6503.188 - 6553.600: 67.8231% ( 892) 00:07:40.748 6553.600 - 6604.012: 72.1663% ( 820) 00:07:40.748 6604.012 - 6654.425: 75.5508% ( 639) 00:07:40.748 6654.425 - 6704.837: 77.7436% ( 414) 00:07:40.748 6704.837 - 6755.249: 79.1472% ( 265) 00:07:40.748 6755.249 - 6805.662: 80.3019% ( 218) 00:07:40.748 6805.662 - 6856.074: 81.1758% ( 165) 00:07:40.748 6856.074 - 6906.486: 81.9915% ( 154) 00:07:40.748 6906.486 - 6956.898: 82.6483% ( 124) 00:07:40.748 6956.898 - 7007.311: 83.1674% ( 98) 00:07:40.748 7007.311 - 7057.723: 83.6600% ( 93) 00:07:40.748 7057.723 - 7108.135: 84.1102% ( 85) 00:07:40.748 7108.135 - 7158.548: 84.4862% ( 71) 00:07:40.748 7158.548 - 7208.960: 84.7934% ( 58) 00:07:40.748 7208.960 - 7259.372: 85.1271% ( 63) 00:07:40.748 7259.372 - 7309.785: 85.4396% ( 59) 00:07:40.748 7309.785 - 7360.197: 85.7415% ( 57) 00:07:40.748 7360.197 - 7410.609: 86.0169% ( 52) 00:07:40.748 7410.609 - 7461.022: 86.2712% ( 48) 00:07:40.748 7461.022 - 7511.434: 86.5307% ( 49) 00:07:40.748 7511.434 - 7561.846: 86.7903% ( 49) 00:07:40.748 7561.846 - 7612.258: 87.0869% ( 56) 00:07:40.748 7612.258 - 7662.671: 87.3729% ( 54) 00:07:40.748 7662.671 - 7713.083: 87.6324% ( 49) 00:07:40.748 7713.083 - 7763.495: 87.8972% ( 50) 00:07:40.748 7763.495 - 7813.908: 88.1674% ( 51) 00:07:40.748 7813.908 - 7864.320: 88.4481% ( 53) 00:07:40.748 7864.320 - 7914.732: 88.7977% ( 66) 00:07:40.748 7914.732 - 7965.145: 89.1208% ( 61) 00:07:40.748 7965.145 - 8015.557: 89.4174% ( 56) 00:07:40.748 8015.557 - 8065.969: 89.6928% ( 52) 00:07:40.748 8065.969 - 8116.382: 90.0053% ( 59) 00:07:40.748 8116.382 - 8166.794: 90.3072% ( 57) 00:07:40.748 8166.794 - 8217.206: 90.5667% ( 49) 00:07:40.748 8217.206 - 8267.618: 90.8210% ( 48) 00:07:40.748 8267.618 - 8318.031: 91.0911% ( 51) 00:07:40.748 8318.031 - 8368.443: 91.3294% ( 45) 00:07:40.748 8368.443 - 8418.855: 91.5678% ( 45) 00:07:40.748 8418.855 - 8469.268: 91.8538% ( 54) 00:07:40.748 8469.268 - 8519.680: 92.1398% ( 54) 00:07:40.748 8519.680 - 8570.092: 92.4100% ( 51) 00:07:40.748 8570.092 - 8620.505: 92.7066% ( 56) 00:07:40.748 8620.505 - 8670.917: 92.9343% ( 43) 00:07:40.748 8670.917 - 8721.329: 93.1409% ( 39) 00:07:40.748 8721.329 - 8771.742: 93.3475% ( 39) 00:07:40.748 8771.742 - 8822.154: 93.5911% ( 46) 00:07:40.748 8822.154 - 8872.566: 93.7977% ( 39) 00:07:40.748 8872.566 - 8922.978: 94.0095% ( 40) 00:07:40.748 8922.978 - 8973.391: 94.1949% ( 35) 00:07:40.748 8973.391 - 9023.803: 94.3750% ( 34) 00:07:40.748 9023.803 - 9074.215: 94.5551% ( 34) 00:07:40.748 9074.215 - 9124.628: 94.7299% ( 33) 00:07:40.748 9124.628 - 9175.040: 94.8782% ( 28) 00:07:40.748 9175.040 - 9225.452: 95.0477% ( 32) 00:07:40.748 9225.452 - 9275.865: 95.1801% ( 25) 00:07:40.748 9275.865 - 9326.277: 95.3019% ( 23) 00:07:40.748 
9326.277 - 9376.689: 95.4502% ( 28) 00:07:40.748 9376.689 - 9427.102: 95.5614% ( 21) 00:07:40.748 9427.102 - 9477.514: 95.6674% ( 20) 00:07:40.748 9477.514 - 9527.926: 95.7839% ( 22) 00:07:40.748 9527.926 - 9578.338: 95.8845% ( 19) 00:07:40.748 9578.338 - 9628.751: 95.9799% ( 18) 00:07:40.748 9628.751 - 9679.163: 96.0540% ( 14) 00:07:40.748 9679.163 - 9729.575: 96.1494% ( 18) 00:07:40.748 9729.575 - 9779.988: 96.2288% ( 15) 00:07:40.748 9779.988 - 9830.400: 96.2924% ( 12) 00:07:40.748 9830.400 - 9880.812: 96.3347% ( 8) 00:07:40.748 9880.812 - 9931.225: 96.4142% ( 15) 00:07:40.748 9931.225 - 9981.637: 96.4778% ( 12) 00:07:40.748 9981.637 - 10032.049: 96.5943% ( 22) 00:07:40.748 10032.049 - 10082.462: 96.7055% ( 21) 00:07:40.748 10082.462 - 10132.874: 96.7850% ( 15) 00:07:40.748 10132.874 - 10183.286: 96.8644% ( 15) 00:07:40.748 10183.286 - 10233.698: 96.9121% ( 9) 00:07:40.748 10233.698 - 10284.111: 96.9597% ( 9) 00:07:40.748 10284.111 - 10334.523: 97.0021% ( 8) 00:07:40.748 10334.523 - 10384.935: 97.0498% ( 9) 00:07:40.748 10384.935 - 10435.348: 97.1028% ( 10) 00:07:40.748 10435.348 - 10485.760: 97.1557% ( 10) 00:07:40.748 10485.760 - 10536.172: 97.2087% ( 10) 00:07:40.748 10536.172 - 10586.585: 97.2511% ( 8) 00:07:40.748 10586.585 - 10636.997: 97.2881% ( 7) 00:07:40.748 10636.997 - 10687.409: 97.3358% ( 9) 00:07:40.748 10687.409 - 10737.822: 97.4047% ( 13) 00:07:40.748 10737.822 - 10788.234: 97.4576% ( 10) 00:07:40.748 10788.234 - 10838.646: 97.5053% ( 9) 00:07:40.748 10838.646 - 10889.058: 97.5477% ( 8) 00:07:40.748 10889.058 - 10939.471: 97.5953% ( 9) 00:07:40.749 10939.471 - 10989.883: 97.6430% ( 9) 00:07:40.749 10989.883 - 11040.295: 97.6960% ( 10) 00:07:40.749 11040.295 - 11090.708: 97.7489% ( 10) 00:07:40.749 11090.708 - 11141.120: 97.7966% ( 9) 00:07:40.749 11141.120 - 11191.532: 97.8390% ( 8) 00:07:40.749 11191.532 - 11241.945: 97.8761% ( 7) 00:07:40.749 11241.945 - 11292.357: 97.9131% ( 7) 00:07:40.749 11292.357 - 11342.769: 97.9449% ( 6) 00:07:40.749 11342.769 - 11393.182: 97.9820% ( 7) 00:07:40.749 11393.182 - 11443.594: 98.0191% ( 7) 00:07:40.749 11443.594 - 11494.006: 98.0508% ( 6) 00:07:40.749 11494.006 - 11544.418: 98.0826% ( 6) 00:07:40.749 11544.418 - 11594.831: 98.1091% ( 5) 00:07:40.749 11594.831 - 11645.243: 98.1409% ( 6) 00:07:40.749 11645.243 - 11695.655: 98.1674% ( 5) 00:07:40.749 11695.655 - 11746.068: 98.1992% ( 6) 00:07:40.749 11746.068 - 11796.480: 98.2362% ( 7) 00:07:40.749 11796.480 - 11846.892: 98.2786% ( 8) 00:07:40.749 11846.892 - 11897.305: 98.3104% ( 6) 00:07:40.749 11897.305 - 11947.717: 98.3369% ( 5) 00:07:40.749 11947.717 - 11998.129: 98.3528% ( 3) 00:07:40.749 11998.129 - 12048.542: 98.3686% ( 3) 00:07:40.749 12048.542 - 12098.954: 98.3898% ( 4) 00:07:40.749 12098.954 - 12149.366: 98.3951% ( 1) 00:07:40.749 12149.366 - 12199.778: 98.4004% ( 1) 00:07:40.749 12199.778 - 12250.191: 98.4057% ( 1) 00:07:40.749 12250.191 - 12300.603: 98.4163% ( 2) 00:07:40.749 12300.603 - 12351.015: 98.4269% ( 2) 00:07:40.749 12351.015 - 12401.428: 98.4428% ( 3) 00:07:40.749 12401.428 - 12451.840: 98.4481% ( 1) 00:07:40.749 12451.840 - 12502.252: 98.4587% ( 2) 00:07:40.749 12502.252 - 12552.665: 98.4746% ( 3) 00:07:40.749 12552.665 - 12603.077: 98.5011% ( 5) 00:07:40.749 12603.077 - 12653.489: 98.5381% ( 7) 00:07:40.749 12653.489 - 12703.902: 98.5540% ( 3) 00:07:40.749 12703.902 - 12754.314: 98.5752% ( 4) 00:07:40.749 12754.314 - 12804.726: 98.5911% ( 3) 00:07:40.749 12804.726 - 12855.138: 98.6070% ( 3) 00:07:40.749 12855.138 - 12905.551: 98.6176% ( 2) 00:07:40.749 
12905.551 - 13006.375: 98.6653% ( 9) 00:07:40.749 13006.375 - 13107.200: 98.7076% ( 8) 00:07:40.749 13107.200 - 13208.025: 98.7447% ( 7) 00:07:40.749 13208.025 - 13308.849: 98.7818% ( 7) 00:07:40.749 13308.849 - 13409.674: 98.8242% ( 8) 00:07:40.749 13409.674 - 13510.498: 98.8506% ( 5) 00:07:40.749 13510.498 - 13611.323: 98.8718% ( 4) 00:07:40.749 13611.323 - 13712.148: 98.8877% ( 3) 00:07:40.749 13712.148 - 13812.972: 98.9089% ( 4) 00:07:40.749 13812.972 - 13913.797: 98.9513% ( 8) 00:07:40.749 13913.797 - 14014.622: 98.9831% ( 6) 00:07:40.749 14014.622 - 14115.446: 99.0201% ( 7) 00:07:40.749 14115.446 - 14216.271: 99.0413% ( 4) 00:07:40.749 14216.271 - 14317.095: 99.0572% ( 3) 00:07:40.749 14317.095 - 14417.920: 99.0731% ( 3) 00:07:40.749 14417.920 - 14518.745: 99.0890% ( 3) 00:07:40.749 14518.745 - 14619.569: 99.1049% ( 3) 00:07:40.749 14619.569 - 14720.394: 99.1208% ( 3) 00:07:40.749 14720.394 - 14821.218: 99.1367% ( 3) 00:07:40.749 14821.218 - 14922.043: 99.1472% ( 2) 00:07:40.749 14922.043 - 15022.868: 99.1631% ( 3) 00:07:40.749 15022.868 - 15123.692: 99.1790% ( 3) 00:07:40.749 15123.692 - 15224.517: 99.1949% ( 3) 00:07:40.749 15224.517 - 15325.342: 99.2108% ( 3) 00:07:40.749 15325.342 - 15426.166: 99.2267% ( 3) 00:07:40.749 15426.166 - 15526.991: 99.2426% ( 3) 00:07:40.749 15526.991 - 15627.815: 99.2585% ( 3) 00:07:40.749 15627.815 - 15728.640: 99.2744% ( 3) 00:07:40.749 15728.640 - 15829.465: 99.2850% ( 2) 00:07:40.749 15829.465 - 15930.289: 99.3008% ( 3) 00:07:40.749 15930.289 - 16031.114: 99.3167% ( 3) 00:07:40.749 16031.114 - 16131.938: 99.3220% ( 1) 00:07:40.749 17845.957 - 17946.782: 99.3379% ( 3) 00:07:40.749 17946.782 - 18047.606: 99.3591% ( 4) 00:07:40.749 18047.606 - 18148.431: 99.3750% ( 3) 00:07:40.749 18148.431 - 18249.255: 99.4068% ( 6) 00:07:40.749 18249.255 - 18350.080: 99.4227% ( 3) 00:07:40.749 18350.080 - 18450.905: 99.4439% ( 4) 00:07:40.749 18450.905 - 18551.729: 99.4650% ( 4) 00:07:40.749 18551.729 - 18652.554: 99.4862% ( 4) 00:07:40.749 18652.554 - 18753.378: 99.5021% ( 3) 00:07:40.749 18753.378 - 18854.203: 99.5233% ( 4) 00:07:40.749 18854.203 - 18955.028: 99.5445% ( 4) 00:07:40.749 18955.028 - 19055.852: 99.5657% ( 4) 00:07:40.749 19055.852 - 19156.677: 99.5869% ( 4) 00:07:40.749 19156.677 - 19257.502: 99.6081% ( 4) 00:07:40.749 19257.502 - 19358.326: 99.6292% ( 4) 00:07:40.749 19358.326 - 19459.151: 99.6504% ( 4) 00:07:40.749 19459.151 - 19559.975: 99.6610% ( 2) 00:07:40.749 22887.188 - 22988.012: 99.6716% ( 2) 00:07:40.749 22988.012 - 23088.837: 99.6928% ( 4) 00:07:40.749 23088.837 - 23189.662: 99.7087% ( 3) 00:07:40.749 23189.662 - 23290.486: 99.7299% ( 4) 00:07:40.749 23290.486 - 23391.311: 99.7458% ( 3) 00:07:40.749 23391.311 - 23492.135: 99.7669% ( 4) 00:07:40.749 23492.135 - 23592.960: 99.7881% ( 4) 00:07:40.749 23592.960 - 23693.785: 99.8093% ( 4) 00:07:40.749 23693.785 - 23794.609: 99.8252% ( 3) 00:07:40.749 23794.609 - 23895.434: 99.8464% ( 4) 00:07:40.749 23895.434 - 23996.258: 99.8676% ( 4) 00:07:40.749 23996.258 - 24097.083: 99.8888% ( 4) 00:07:40.749 24097.083 - 24197.908: 99.9100% ( 4) 00:07:40.749 24197.908 - 24298.732: 99.9311% ( 4) 00:07:40.749 24298.732 - 24399.557: 99.9523% ( 4) 00:07:40.749 24399.557 - 24500.382: 99.9735% ( 4) 00:07:40.749 24500.382 - 24601.206: 99.9947% ( 4) 00:07:40.749 24601.206 - 24702.031: 100.0000% ( 1) 00:07:40.749 00:07:40.749 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:40.749 ============================================================================== 00:07:40.749 Range in us Cumulative 
IO count 00:07:40.749 5646.178 - 5671.385: 0.0053% ( 1) 00:07:40.749 5671.385 - 5696.591: 0.0477% ( 8) 00:07:40.749 5696.591 - 5721.797: 0.1112% ( 12) 00:07:40.749 5721.797 - 5747.003: 0.2436% ( 25) 00:07:40.749 5747.003 - 5772.209: 0.5508% ( 58) 00:07:40.749 5772.209 - 5797.415: 1.4883% ( 177) 00:07:40.749 5797.415 - 5822.622: 3.0244% ( 290) 00:07:40.749 5822.622 - 5847.828: 4.6345% ( 304) 00:07:40.749 5847.828 - 5873.034: 6.4883% ( 350) 00:07:40.749 5873.034 - 5898.240: 8.4216% ( 365) 00:07:40.749 5898.240 - 5923.446: 10.6197% ( 415) 00:07:40.749 5923.446 - 5948.652: 12.7383% ( 400) 00:07:40.749 5948.652 - 5973.858: 14.8199% ( 393) 00:07:40.749 5973.858 - 5999.065: 16.9280% ( 398) 00:07:40.749 5999.065 - 6024.271: 18.9036% ( 373) 00:07:40.749 6024.271 - 6049.477: 20.9481% ( 386) 00:07:40.749 6049.477 - 6074.683: 23.1038% ( 407) 00:07:40.749 6074.683 - 6099.889: 25.3178% ( 418) 00:07:40.749 6099.889 - 6125.095: 27.5212% ( 416) 00:07:40.749 6125.095 - 6150.302: 29.7934% ( 429) 00:07:40.749 6150.302 - 6175.508: 32.2193% ( 458) 00:07:40.749 6175.508 - 6200.714: 34.6610% ( 461) 00:07:40.749 6200.714 - 6225.920: 37.1504% ( 470) 00:07:40.749 6225.920 - 6251.126: 39.4968% ( 443) 00:07:40.749 6251.126 - 6276.332: 41.8538% ( 445) 00:07:40.749 6276.332 - 6301.538: 44.1472% ( 433) 00:07:40.749 6301.538 - 6326.745: 46.5625% ( 456) 00:07:40.749 6326.745 - 6351.951: 48.9089% ( 443) 00:07:40.749 6351.951 - 6377.157: 51.3189% ( 455) 00:07:40.749 6377.157 - 6402.363: 53.6653% ( 443) 00:07:40.749 6402.363 - 6427.569: 56.1388% ( 467) 00:07:40.749 6427.569 - 6452.775: 58.5593% ( 457) 00:07:40.749 6452.775 - 6503.188: 63.4269% ( 919) 00:07:40.749 6503.188 - 6553.600: 68.1780% ( 897) 00:07:40.749 6553.600 - 6604.012: 72.6430% ( 843) 00:07:40.749 6604.012 - 6654.425: 75.9428% ( 623) 00:07:40.749 6654.425 - 6704.837: 78.1144% ( 410) 00:07:40.749 6704.837 - 6755.249: 79.6239% ( 285) 00:07:40.749 6755.249 - 6805.662: 80.7680% ( 216) 00:07:40.749 6805.662 - 6856.074: 81.5996% ( 157) 00:07:40.749 6856.074 - 6906.486: 82.3729% ( 146) 00:07:40.749 6906.486 - 6956.898: 83.1144% ( 140) 00:07:40.749 6956.898 - 7007.311: 83.6335% ( 98) 00:07:40.749 7007.311 - 7057.723: 84.0042% ( 70) 00:07:40.749 7057.723 - 7108.135: 84.3697% ( 69) 00:07:40.749 7108.135 - 7158.548: 84.7087% ( 64) 00:07:40.749 7158.548 - 7208.960: 85.0689% ( 68) 00:07:40.749 7208.960 - 7259.372: 85.4555% ( 73) 00:07:40.749 7259.372 - 7309.785: 85.7362% ( 53) 00:07:40.749 7309.785 - 7360.197: 86.0328% ( 56) 00:07:40.749 7360.197 - 7410.609: 86.3189% ( 54) 00:07:40.749 7410.609 - 7461.022: 86.6155% ( 56) 00:07:40.749 7461.022 - 7511.434: 86.9015% ( 54) 00:07:40.749 7511.434 - 7561.846: 87.1981% ( 56) 00:07:40.749 7561.846 - 7612.258: 87.4364% ( 45) 00:07:40.749 7612.258 - 7662.671: 87.7913% ( 67) 00:07:40.749 7662.671 - 7713.083: 88.1144% ( 61) 00:07:40.749 7713.083 - 7763.495: 88.3422% ( 43) 00:07:40.749 7763.495 - 7813.908: 88.5646% ( 42) 00:07:40.749 7813.908 - 7864.320: 88.7765% ( 40) 00:07:40.749 7864.320 - 7914.732: 88.9989% ( 42) 00:07:40.749 7914.732 - 7965.145: 89.2161% ( 41) 00:07:40.749 7965.145 - 8015.557: 89.4809% ( 50) 00:07:40.749 8015.557 - 8065.969: 89.7140% ( 44) 00:07:40.749 8065.969 - 8116.382: 89.9206% ( 39) 00:07:40.749 8116.382 - 8166.794: 90.1218% ( 38) 00:07:40.749 8166.794 - 8217.206: 90.4237% ( 57) 00:07:40.749 8217.206 - 8267.618: 90.6886% ( 50) 00:07:40.749 8267.618 - 8318.031: 90.9534% ( 50) 00:07:40.749 8318.031 - 8368.443: 91.2235% ( 51) 00:07:40.749 8368.443 - 8418.855: 91.4936% ( 51) 00:07:40.749 8418.855 - 8469.268: 
00:07:40.749 [latency histogram dump continues for the preceding namespace; buckets from 92.0127% ( 52) at 8469.268-8519.680us through 100.0000% ( 3) at 22887.188-22988.012us elided]
00:07:40.750 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:40.750 ==============================================================================
00:07:40.750 Range in us Cumulative IO count
00:07:40.750 [bucket dump elided: cumulative share rises from 0.0053% ( 1) at 5646.178-5671.385us to 100.0000% ( 2) at 21273.994-21374.818us]
00:07:40.751 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:40.751 ==============================================================================
00:07:40.752 Range in us Cumulative IO count
00:07:40.752 [bucket dump elided: cumulative share rises from 0.0530% ( 10) at 5696.591-5721.797us to 100.0000% ( 1) at 19559.975-19660.800us]
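Each histogram row above has the form `range_lo - range_hi: cumulative% ( IOs in this bucket )`, so any percentile can be read straight off the dump: it is the upper edge of the first bucket whose cumulative share reaches the target. A minimal sketch, using three buckets hand-copied from the PCIE (0000:00:12.0) NSID 2 dump (elided above); the helper name is ours, not part of spdk_nvme_perf:

```python
def percentile_from_cumulative(buckets, pct):
    """buckets: [(lo_us, hi_us, cumulative_pct), ...] in ascending order.
    Returns the upper bucket edge where the cumulative share of completed
    IOs first reaches pct, which is how the pNN summaries are read off."""
    for _lo, hi, cum in buckets:
        if cum >= pct:
            return hi
    raise ValueError("histogram does not reach the requested percentile")

# Three consecutive buckets copied from the NSID 2 dump:
fragment = [
    (6326.745, 6351.951, 48.7341),
    (6351.951, 6377.157, 51.1494),
    (6377.157, 6402.363, 53.5381),
]
print(percentile_from_cumulative(fragment, 50.0))  # -> 6377.157 (median ~6.38 ms)
```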
00:07:40.753
00:07:40.753 17:48:58 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:41.685 Initializing NVMe Controllers
00:07:41.685 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:41.685 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:41.685 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:41.685 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:41.685 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:41.685 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:41.685 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:41.685 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:41.685 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:41.685 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:41.685 Initialization complete. Launching workers.
00:07:41.685 ========================================================
00:07:41.685                                             Latency(us)
00:07:41.685 Device Information                     :       IOPS      MiB/s    Average        min        max
00:07:41.686 PCIE (0000:00:10.0) NSID 1 from core 0:   18547.18     217.35    6910.73    5299.66   32388.81
00:07:41.686 PCIE (0000:00:11.0) NSID 1 from core 0:   18547.18     217.35    6900.36    5576.61   30503.24
00:07:41.686 PCIE (0000:00:13.0) NSID 1 from core 0:   18547.18     217.35    6889.84    5525.63   29185.02
00:07:41.686 PCIE (0000:00:12.0) NSID 1 from core 0:   18547.18     217.35    6879.39    5452.13   27850.31
00:07:41.686 PCIE (0000:00:12.0) NSID 2 from core 0:   18547.18     217.35    6868.94    5433.09   26392.37
00:07:41.686 PCIE (0000:00:12.0) NSID 3 from core 0:   18547.18     217.35    6858.57    5456.29   24480.92
00:07:41.686 ========================================================
00:07:41.686 Total                                  :  111283.10    1304.10    6884.64    5299.66   32388.81
00:07:41.686
00:07:41.686 [the six "Summary latency data ... from core 0" blocks are consolidated into one table below; all values in us]
00:07:41.686 =================================================================================
00:07:41.686 Percentile    10.0/NSID1   11.0/NSID1   13.0/NSID1   12.0/NSID1   12.0/NSID2   12.0/NSID3
00:07:41.686   1.00000%      5772.209     5873.034     5847.828     5822.622     5822.622     5822.622
00:07:41.686  10.00000%      6150.302     6200.714     6200.714     6200.714     6200.714     6200.714
00:07:41.686  25.00000%      6351.951     6377.157     6377.157     6377.157     6377.157     6377.157
00:07:41.686  50.00000%      6654.425     6604.012     6604.012     6604.012     6604.012     6604.012
00:07:41.686  75.00000%      7057.723     7057.723     7057.723     7057.723     7057.723     7057.723
00:07:41.686  90.00000%      7662.671     7662.671     7662.671     7662.671     7662.671     7662.671
00:07:41.686  95.00000%      8166.794     8166.794     8267.618     8166.794     8166.794     8166.794
00:07:41.686  98.00000%      8922.978     8822.154     8721.329     8670.917     8721.329     8872.566
00:07:41.686  99.00000%     10687.409    10132.874     9628.751     9477.514     9427.102     9477.514
00:07:41.686  99.50000%     23895.434    23592.960    23088.837    21576.468    20164.923    18955.028
00:07:41.686  99.90000%     31860.578    30045.735    28634.191    27424.295    25811.102    23088.837
00:07:41.686  99.99000%     32465.526    30650.683    29239.138    27827.594    26416.049    24500.382
00:07:41.686  99.99900%     32465.526    30650.683    29239.138    28029.243    26416.049    24500.382
00:07:41.686  99.99990%     32465.526    30650.683    29239.138    28029.243    26416.049    24500.382
00:07:41.686  99.99999%     32465.526    30650.683    29239.138    28029.243    26416.049    24500.382
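The IOPS and MiB/s columns in the table above are mutually consistent with the 12288-byte I/O size requested by `-o 12288`: MiB/s = IOPS * io_size / 2^20. A quick check with values copied from the table:

```python
io_size = 12288                      # bytes per write, from -o 12288
per_ns_iops = 18547.18               # any single-namespace row
total_iops = 111283.10               # Total row

print(f"{per_ns_iops * io_size / (1 << 20):.2f} MiB/s")  # -> 217.35, matches
print(f"{total_iops * io_size / (1 << 20):.2f} MiB/s")   # -> 1304.10, matches
```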
00:07:41.686 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:41.686 ==============================================================================
00:07:41.686 Range in us Cumulative IO count
00:07:41.686 [bucket dump elided: cumulative share rises from 0.0054% ( 1) at 5293.292-5318.498us to 100.0000% ( 4) at 32263.877-32465.526us]
00:07:41.687 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:41.687 ==============================================================================
00:07:41.687 Range in us Cumulative IO count
00:07:41.687 [bucket dump elided: cumulative share rises from 0.0108% ( 2) at 5570.560-5595.766us to 100.0000% ( 2) at 30449.034-30650.683us]
00:07:41.688 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:41.688 ==============================================================================
00:07:41.688 Range in us Cumulative IO count
00:07:41.688 [bucket dump elided: cumulative share rises from 0.0108% ( 2) at 5520.148-5545.354us to 100.0000% ( 6) at 29037.489-29239.138us]
00:07:41.688 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:41.688 ==============================================================================
00:07:41.688 Range in us Cumulative IO count
00:07:41.688 [bucket dump elided: cumulative share rises from 0.0054% ( 1) at 5444.529-5469.735us to 100.0000% ( 1) at 27827.594-28029.243us]
00:07:41.689 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:41.689 ==============================================================================
00:07:41.689 Range in us Cumulative IO count
00:07:41.689 [bucket dump elided: cumulative share rises from 0.0054% ( 1) at 5419.323-5444.529us to 100.0000% ( 6) at 26214.400-26416.049us]
00:07:41.690 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:41.690 ==============================================================================
00:07:41.690 Range in us Cumulative IO count
00:07:41.690 [bucket dump elided from 0.0054% ( 1) at 5444.529-5469.735us; the excerpt ends mid-dump at 85.0808% ( 308) in the 7259.372-7309.785us bucket]
00:07:41.690 7309.785 - 7360.197: 86.3308% ( 232) 00:07:41.690 7360.197 - 7410.609: 87.3653% ( 192) 00:07:41.690 7410.609 - 7461.022: 88.0927% ( 135) 00:07:41.690 7461.022 - 7511.434: 88.7338% ( 119) 00:07:41.690 7511.434 - 7561.846: 89.2780% ( 101) 00:07:41.690 7561.846 - 7612.258: 89.8330% ( 103) 00:07:41.690 7612.258 - 7662.671: 90.3071% ( 88) 00:07:41.690 7662.671 - 7713.083: 90.7328% ( 79) 00:07:41.690 7713.083 - 7763.495: 91.2823% ( 102) 00:07:41.690 7763.495 - 7813.908: 92.0205% ( 137) 00:07:41.690 7813.908 - 7864.320: 92.6509% ( 117) 00:07:41.690 7864.320 - 7914.732: 93.1088% ( 85) 00:07:41.690 7914.732 - 7965.145: 93.6800% ( 106) 00:07:41.690 7965.145 - 8015.557: 94.1541% ( 88) 00:07:41.690 8015.557 - 8065.969: 94.6228% ( 87) 00:07:41.690 8065.969 - 8116.382: 94.9946% ( 69) 00:07:41.690 8116.382 - 8166.794: 95.3287% ( 62) 00:07:41.690 8166.794 - 8217.206: 95.8244% ( 92) 00:07:41.690 8217.206 - 8267.618: 96.1853% ( 67) 00:07:41.690 8267.618 - 8318.031: 96.4386% ( 47) 00:07:41.690 8318.031 - 8368.443: 96.7403% ( 56) 00:07:41.690 8368.443 - 8418.855: 96.9397% ( 37) 00:07:41.690 8418.855 - 8469.268: 97.1552% ( 40) 00:07:41.690 8469.268 - 8519.680: 97.2360% ( 15) 00:07:41.690 8519.680 - 8570.092: 97.3222% ( 16) 00:07:41.690 8570.092 - 8620.505: 97.4300% ( 20) 00:07:41.690 8620.505 - 8670.917: 97.5485% ( 22) 00:07:41.690 8670.917 - 8721.329: 97.6670% ( 22) 00:07:41.690 8721.329 - 8771.742: 97.7694% ( 19) 00:07:41.690 8771.742 - 8822.154: 97.8772% ( 20) 00:07:41.690 8822.154 - 8872.566: 98.1627% ( 53) 00:07:41.690 8872.566 - 8922.978: 98.2328% ( 13) 00:07:41.690 8922.978 - 8973.391: 98.3459% ( 21) 00:07:41.690 8973.391 - 9023.803: 98.4914% ( 27) 00:07:41.690 9023.803 - 9074.215: 98.5399% ( 9) 00:07:41.690 9074.215 - 9124.628: 98.5506% ( 2) 00:07:41.690 9124.628 - 9175.040: 98.5776% ( 5) 00:07:41.690 9175.040 - 9225.452: 98.6746% ( 18) 00:07:41.690 9225.452 - 9275.865: 98.7392% ( 12) 00:07:41.690 9275.865 - 9326.277: 98.8254% ( 16) 00:07:41.690 9326.277 - 9376.689: 98.9224% ( 18) 00:07:41.690 9376.689 - 9427.102: 98.9871% ( 12) 00:07:41.690 9427.102 - 9477.514: 99.0733% ( 16) 00:07:41.690 9477.514 - 9527.926: 99.1487% ( 14) 00:07:41.690 9527.926 - 9578.338: 99.1649% ( 3) 00:07:41.690 9578.338 - 9628.751: 99.1918% ( 5) 00:07:41.690 9628.751 - 9679.163: 99.2188% ( 5) 00:07:41.690 9679.163 - 9729.575: 99.2349% ( 3) 00:07:41.690 9729.575 - 9779.988: 99.2619% ( 5) 00:07:41.690 9779.988 - 9830.400: 99.2780% ( 3) 00:07:41.690 9830.400 - 9880.812: 99.2996% ( 4) 00:07:41.690 9880.812 - 9931.225: 99.3103% ( 2) 00:07:41.690 17442.658 - 17543.483: 99.3157% ( 1) 00:07:41.690 18551.729 - 18652.554: 99.3427% ( 5) 00:07:41.690 18652.554 - 18753.378: 99.4019% ( 11) 00:07:41.690 18753.378 - 18854.203: 99.4612% ( 11) 00:07:41.690 18854.203 - 18955.028: 99.5097% ( 9) 00:07:41.691 18955.028 - 19055.852: 99.5259% ( 3) 00:07:41.691 19055.852 - 19156.677: 99.5366% ( 2) 00:07:41.691 19156.677 - 19257.502: 99.5528% ( 3) 00:07:41.691 19257.502 - 19358.326: 99.5636% ( 2) 00:07:41.691 19358.326 - 19459.151: 99.5797% ( 3) 00:07:41.691 19459.151 - 19559.975: 99.5959% ( 3) 00:07:41.691 19559.975 - 19660.800: 99.6121% ( 3) 00:07:41.691 19660.800 - 19761.625: 99.6282% ( 3) 00:07:41.691 19761.625 - 19862.449: 99.6444% ( 3) 00:07:41.691 19862.449 - 19963.274: 99.6552% ( 2) 00:07:41.691 22685.538 - 22786.363: 99.6821% ( 5) 00:07:41.691 22786.363 - 22887.188: 99.8060% ( 23) 00:07:41.691 22887.188 - 22988.012: 99.8222% ( 3) 00:07:41.691 22988.012 - 23088.837: 99.9030% ( 15) 00:07:41.691 23088.837 - 23189.662: 99.9138% ( 2) 
00:07:41.691 23189.662 - 23290.486: 99.9246% ( 2) 00:07:41.691 23290.486 - 23391.311: 99.9300% ( 1) 00:07:41.691 23996.258 - 24097.083: 99.9353% ( 1) 00:07:41.691 24097.083 - 24197.908: 99.9515% ( 3) 00:07:41.691 24197.908 - 24298.732: 99.9677% ( 3) 00:07:41.691 24298.732 - 24399.557: 99.9838% ( 3) 00:07:41.691 24399.557 - 24500.382: 100.0000% ( 3) 00:07:41.691 00:07:41.691 ************************************ 00:07:41.691 END TEST nvme_perf 00:07:41.691 ************************************ 00:07:41.691 17:49:00 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:41.691 00:07:41.691 real 0m2.494s 00:07:41.691 user 0m2.215s 00:07:41.691 sys 0m0.189s 00:07:41.691 17:49:00 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.691 17:49:00 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:41.949 17:49:00 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:41.949 17:49:00 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:41.949 17:49:00 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.949 17:49:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:41.949 ************************************ 00:07:41.949 START TEST nvme_hello_world 00:07:41.949 ************************************ 00:07:41.949 17:49:00 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:41.949 Initializing NVMe Controllers 00:07:41.949 Attached to 0000:00:10.0 00:07:41.949 Namespace ID: 1 size: 6GB 00:07:41.949 Attached to 0000:00:11.0 00:07:41.949 Namespace ID: 1 size: 5GB 00:07:41.949 Attached to 0000:00:13.0 00:07:41.949 Namespace ID: 1 size: 1GB 00:07:41.949 Attached to 0000:00:12.0 00:07:41.949 Namespace ID: 1 size: 4GB 00:07:41.949 Namespace ID: 2 size: 4GB 00:07:41.949 Namespace ID: 3 size: 4GB 00:07:41.949 Initialization complete. 00:07:41.949 INFO: using host memory buffer for IO 00:07:41.949 Hello world! 00:07:41.949 INFO: using host memory buffer for IO 00:07:41.949 Hello world! 00:07:41.949 INFO: using host memory buffer for IO 00:07:41.949 Hello world! 00:07:41.949 INFO: using host memory buffer for IO 00:07:41.949 Hello world! 00:07:41.949 INFO: using host memory buffer for IO 00:07:41.949 Hello world! 00:07:41.949 INFO: using host memory buffer for IO 00:07:41.949 Hello world! 
00:07:41.949 ************************************ 00:07:41.949 END TEST nvme_hello_world 00:07:41.949 ************************************ 00:07:41.949 00:07:41.949 real 0m0.223s 00:07:41.949 user 0m0.080s 00:07:41.949 sys 0m0.100s 00:07:41.949 17:49:00 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.949 17:49:00 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:41.949 17:49:00 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:41.949 17:49:00 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:41.949 17:49:00 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.949 17:49:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:42.208 ************************************ 00:07:42.208 START TEST nvme_sgl 00:07:42.208 ************************************ 00:07:42.208 17:49:00 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:42.208 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:07:42.208 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:07:42.208 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:07:42.208 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:07:42.208 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:07:42.208 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:07:42.208 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:07:42.208 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:07:42.208 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:07:42.208 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:07:42.208 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:07:42.208 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:07:42.208 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:07:42.208 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:07:42.208 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:07:42.208 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:07:42.208 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:07:42.208 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:07:42.208 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:07:42.208 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:07:42.208 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:07:42.208 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:07:42.208 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:07:42.208 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:07:42.208 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:07:42.208 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:07:42.208 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:07:42.208 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:07:42.208 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:07:42.208 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:07:42.208 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:07:42.208 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:07:42.208 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:07:42.208 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:07:42.208 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:42.208 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:42.466 NVMe Readv/Writev Request test 00:07:42.466 Attached to 0000:00:10.0 00:07:42.466 Attached to 0000:00:11.0 00:07:42.466 Attached to 0000:00:13.0 00:07:42.466 Attached to 0000:00:12.0 00:07:42.466 0000:00:10.0: build_io_request_2 test passed 00:07:42.466 0000:00:10.0: build_io_request_4 test passed 00:07:42.466 0000:00:10.0: build_io_request_5 test passed 00:07:42.466 0000:00:10.0: build_io_request_6 test passed 00:07:42.466 0000:00:10.0: build_io_request_7 test passed 00:07:42.466 0000:00:10.0: build_io_request_10 test passed 00:07:42.466 0000:00:11.0: build_io_request_2 test passed 00:07:42.466 0000:00:11.0: build_io_request_4 test passed 00:07:42.466 0000:00:11.0: build_io_request_5 test passed 00:07:42.466 0000:00:11.0: build_io_request_6 test passed 00:07:42.466 0000:00:11.0: build_io_request_7 test passed 00:07:42.466 0000:00:11.0: build_io_request_10 test passed 00:07:42.466 Cleaning up... 00:07:42.466 ************************************ 00:07:42.466 END TEST nvme_sgl 00:07:42.466 ************************************ 00:07:42.466 00:07:42.466 real 0m0.280s 00:07:42.466 user 0m0.137s 00:07:42.466 sys 0m0.095s 00:07:42.466 17:49:00 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.466 17:49:00 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:42.466 17:49:00 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:42.466 17:49:00 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:42.466 17:49:00 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.466 17:49:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:42.466 ************************************ 00:07:42.466 START TEST nvme_e2edp 00:07:42.466 ************************************ 00:07:42.466 17:49:00 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:42.466 NVMe Write/Read with End-to-End data protection test 00:07:42.466 Attached to 0000:00:10.0 00:07:42.466 Attached to 0000:00:11.0 00:07:42.466 Attached to 0000:00:13.0 00:07:42.466 Attached to 0000:00:12.0 00:07:42.466 Cleaning up... 
00:07:42.724 ************************************ 00:07:42.724 END TEST nvme_e2edp 00:07:42.724 ************************************ 00:07:42.724 00:07:42.724 real 0m0.202s 00:07:42.724 user 0m0.064s 00:07:42.724 sys 0m0.096s 00:07:42.724 17:49:00 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.724 17:49:00 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:42.724 17:49:00 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:42.724 17:49:00 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:42.724 17:49:00 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.724 17:49:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:42.724 ************************************ 00:07:42.724 START TEST nvme_reserve 00:07:42.724 ************************************ 00:07:42.724 17:49:00 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:42.724 ===================================================== 00:07:42.724 NVMe Controller at PCI bus 0, device 16, function 0 00:07:42.724 ===================================================== 00:07:42.724 Reservations: Not Supported 00:07:42.724 ===================================================== 00:07:42.724 NVMe Controller at PCI bus 0, device 17, function 0 00:07:42.724 ===================================================== 00:07:42.724 Reservations: Not Supported 00:07:42.724 ===================================================== 00:07:42.724 NVMe Controller at PCI bus 0, device 19, function 0 00:07:42.724 ===================================================== 00:07:42.724 Reservations: Not Supported 00:07:42.724 ===================================================== 00:07:42.724 NVMe Controller at PCI bus 0, device 18, function 0 00:07:42.724 ===================================================== 00:07:42.724 Reservations: Not Supported 00:07:42.724 Reservation test passed 00:07:42.724 ************************************ 00:07:42.724 END TEST nvme_reserve 00:07:42.724 ************************************ 00:07:42.724 00:07:42.724 real 0m0.211s 00:07:42.724 user 0m0.059s 00:07:42.724 sys 0m0.103s 00:07:42.724 17:49:01 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.724 17:49:01 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:42.982 17:49:01 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:42.982 17:49:01 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:42.982 17:49:01 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.982 17:49:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:42.982 ************************************ 00:07:42.982 START TEST nvme_err_injection 00:07:42.982 ************************************ 00:07:42.982 17:49:01 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:42.982 NVMe Error Injection test 00:07:42.982 Attached to 0000:00:10.0 00:07:42.982 Attached to 0000:00:11.0 00:07:42.982 Attached to 0000:00:13.0 00:07:42.982 Attached to 0000:00:12.0 00:07:42.982 0000:00:11.0: get features failed as expected 00:07:42.982 0000:00:13.0: get features failed as expected 00:07:42.982 0000:00:12.0: get features failed as expected 00:07:42.982 0000:00:10.0: get features failed as expected 00:07:42.982 
0000:00:10.0: get features successfully as expected 00:07:42.982 0000:00:11.0: get features successfully as expected 00:07:42.982 0000:00:13.0: get features successfully as expected 00:07:42.982 0000:00:12.0: get features successfully as expected 00:07:42.982 0000:00:10.0: read failed as expected 00:07:42.982 0000:00:11.0: read failed as expected 00:07:42.982 0000:00:13.0: read failed as expected 00:07:42.982 0000:00:12.0: read failed as expected 00:07:42.982 0000:00:10.0: read successfully as expected 00:07:42.982 0000:00:11.0: read successfully as expected 00:07:42.982 0000:00:13.0: read successfully as expected 00:07:42.982 0000:00:12.0: read successfully as expected 00:07:42.982 Cleaning up... 00:07:42.982 00:07:42.982 real 0m0.225s 00:07:42.982 user 0m0.079s 00:07:42.982 sys 0m0.102s 00:07:42.982 ************************************ 00:07:42.982 END TEST nvme_err_injection 00:07:42.982 ************************************ 00:07:42.982 17:49:01 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.982 17:49:01 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:43.240 17:49:01 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:43.240 17:49:01 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:07:43.240 17:49:01 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.240 17:49:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:43.240 ************************************ 00:07:43.240 START TEST nvme_overhead 00:07:43.240 ************************************ 00:07:43.240 17:49:01 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:44.612 Initializing NVMe Controllers 00:07:44.612 Attached to 0000:00:10.0 00:07:44.612 Attached to 0000:00:11.0 00:07:44.612 Attached to 0000:00:13.0 00:07:44.612 Attached to 0000:00:12.0 00:07:44.612 Initialization complete. Launching workers. 
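The overhead invocation traced above can be replayed by hand. A sketch, with flag meanings inferred from the output that follows rather than from the tool's help text:

    # Assumed meanings, not authoritative:
    #   -o 4096   IO size in bytes
    #   -t 1      run time in seconds
    #   -H        print the submit/complete latency histograms shown below
    #   -i 0      shared-memory ID, matching the other tools in this session
    /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0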
00:07:44.612 submit (in ns) avg, min, max = 11376.1, 9849.2, 186296.9 00:07:44.612 complete (in ns) avg, min, max = 7643.4, 7299.2, 331018.5 00:07:44.612 00:07:44.612 Submit histogram 00:07:44.612 ================ 00:07:44.612 Range in us Cumulative Count 00:07:44.612 9.846 - 9.895: 0.0062% ( 1) 00:07:44.612 10.585 - 10.634: 0.0124% ( 1) 00:07:44.612 10.634 - 10.683: 0.0186% ( 1) 00:07:44.612 10.683 - 10.732: 0.0310% ( 2) 00:07:44.612 10.732 - 10.782: 0.0931% ( 10) 00:07:44.612 10.782 - 10.831: 0.7761% ( 110) 00:07:44.612 10.831 - 10.880: 3.4705% ( 434) 00:07:44.612 10.880 - 10.929: 9.6418% ( 994) 00:07:44.612 10.929 - 10.978: 20.3824% ( 1730) 00:07:44.612 10.978 - 11.028: 35.0841% ( 2368) 00:07:44.612 11.028 - 11.077: 49.1836% ( 2271) 00:07:44.612 11.077 - 11.126: 61.1722% ( 1931) 00:07:44.612 11.126 - 11.175: 69.0321% ( 1266) 00:07:44.612 11.175 - 11.225: 73.5581% ( 729) 00:07:44.612 11.225 - 11.274: 76.0477% ( 401) 00:07:44.612 11.274 - 11.323: 77.6495% ( 258) 00:07:44.612 11.323 - 11.372: 78.7111% ( 171) 00:07:44.612 11.372 - 11.422: 79.6921% ( 158) 00:07:44.612 11.422 - 11.471: 80.5426% ( 137) 00:07:44.612 11.471 - 11.520: 81.5174% ( 157) 00:07:44.612 11.520 - 11.569: 82.4921% ( 157) 00:07:44.612 11.569 - 11.618: 83.1936% ( 113) 00:07:44.612 11.618 - 11.668: 83.9262% ( 118) 00:07:44.612 11.668 - 11.717: 84.4602% ( 86) 00:07:44.612 11.717 - 11.766: 85.1369% ( 109) 00:07:44.612 11.766 - 11.815: 85.6646% ( 85) 00:07:44.612 11.815 - 11.865: 86.3972% ( 118) 00:07:44.612 11.865 - 11.914: 87.0615% ( 107) 00:07:44.612 11.914 - 11.963: 87.9183% ( 138) 00:07:44.612 11.963 - 12.012: 89.0110% ( 176) 00:07:44.612 12.012 - 12.062: 90.4700% ( 235) 00:07:44.612 12.062 - 12.111: 91.9662% ( 241) 00:07:44.612 12.111 - 12.160: 93.3693% ( 226) 00:07:44.612 12.160 - 12.209: 94.3006% ( 150) 00:07:44.612 12.209 - 12.258: 95.0580% ( 122) 00:07:44.612 12.258 - 12.308: 95.5982% ( 87) 00:07:44.612 12.308 - 12.357: 95.9024% ( 49) 00:07:44.612 12.357 - 12.406: 96.1818% ( 45) 00:07:44.612 12.406 - 12.455: 96.3680% ( 30) 00:07:44.612 12.455 - 12.505: 96.4425% ( 12) 00:07:44.612 12.505 - 12.554: 96.5170% ( 12) 00:07:44.612 12.554 - 12.603: 96.5419% ( 4) 00:07:44.612 12.603 - 12.702: 96.5729% ( 5) 00:07:44.612 12.702 - 12.800: 96.6164% ( 7) 00:07:44.612 12.800 - 12.898: 96.7219% ( 17) 00:07:44.612 12.898 - 12.997: 96.8709% ( 24) 00:07:44.612 12.997 - 13.095: 97.0137% ( 23) 00:07:44.612 13.095 - 13.194: 97.1689% ( 25) 00:07:44.612 13.194 - 13.292: 97.2745% ( 17) 00:07:44.612 13.292 - 13.391: 97.3304% ( 9) 00:07:44.612 13.391 - 13.489: 97.3490% ( 3) 00:07:44.612 13.489 - 13.588: 97.3800% ( 5) 00:07:44.612 13.588 - 13.686: 97.4111% ( 5) 00:07:44.612 13.686 - 13.785: 97.4669% ( 9) 00:07:44.612 13.785 - 13.883: 97.5042% ( 6) 00:07:44.612 13.883 - 13.982: 97.5228% ( 3) 00:07:44.613 13.982 - 14.080: 97.5414% ( 3) 00:07:44.613 14.080 - 14.178: 97.5973% ( 9) 00:07:44.613 14.178 - 14.277: 97.6780% ( 13) 00:07:44.613 14.277 - 14.375: 97.7649% ( 14) 00:07:44.613 14.375 - 14.474: 97.9264% ( 26) 00:07:44.613 14.474 - 14.572: 98.0257% ( 16) 00:07:44.613 14.572 - 14.671: 98.1499% ( 20) 00:07:44.613 14.671 - 14.769: 98.1995% ( 8) 00:07:44.613 14.769 - 14.868: 98.2368% ( 6) 00:07:44.613 14.868 - 14.966: 98.2616% ( 4) 00:07:44.613 14.966 - 15.065: 98.2865% ( 4) 00:07:44.613 15.065 - 15.163: 98.3113% ( 4) 00:07:44.613 15.163 - 15.262: 98.3423% ( 5) 00:07:44.613 15.262 - 15.360: 98.3548% ( 2) 00:07:44.613 15.360 - 15.458: 98.3796% ( 4) 00:07:44.613 15.458 - 15.557: 98.3920% ( 2) 00:07:44.613 15.557 - 15.655: 98.4106% ( 3) 00:07:44.613 
15.655 - 15.754: 98.4168% ( 1) 00:07:44.613 15.754 - 15.852: 98.4417% ( 4) 00:07:44.613 15.852 - 15.951: 98.4541% ( 2) 00:07:44.613 15.951 - 16.049: 98.4665% ( 2) 00:07:44.613 16.049 - 16.148: 98.4851% ( 3) 00:07:44.613 16.148 - 16.246: 98.4913% ( 1) 00:07:44.613 16.246 - 16.345: 98.5100% ( 3) 00:07:44.613 16.345 - 16.443: 98.5224% ( 2) 00:07:44.613 16.443 - 16.542: 98.5720% ( 8) 00:07:44.613 16.542 - 16.640: 98.6528% ( 13) 00:07:44.613 16.640 - 16.738: 98.7893% ( 22) 00:07:44.613 16.738 - 16.837: 98.9383% ( 24) 00:07:44.613 16.837 - 16.935: 99.0439% ( 17) 00:07:44.613 16.935 - 17.034: 99.1494% ( 17) 00:07:44.613 17.034 - 17.132: 99.2550% ( 17) 00:07:44.613 17.132 - 17.231: 99.3357% ( 13) 00:07:44.613 17.231 - 17.329: 99.3854% ( 8) 00:07:44.613 17.329 - 17.428: 99.4350% ( 8) 00:07:44.613 17.428 - 17.526: 99.4909% ( 9) 00:07:44.613 17.526 - 17.625: 99.5219% ( 5) 00:07:44.613 17.625 - 17.723: 99.5592% ( 6) 00:07:44.613 17.723 - 17.822: 99.5716% ( 2) 00:07:44.613 17.822 - 17.920: 99.5964% ( 4) 00:07:44.613 17.920 - 18.018: 99.6461% ( 8) 00:07:44.613 18.018 - 18.117: 99.6834% ( 6) 00:07:44.613 18.117 - 18.215: 99.7144% ( 5) 00:07:44.613 18.215 - 18.314: 99.7330% ( 3) 00:07:44.613 18.314 - 18.412: 99.7641% ( 5) 00:07:44.613 18.412 - 18.511: 99.7951% ( 5) 00:07:44.613 18.511 - 18.609: 99.8013% ( 1) 00:07:44.613 18.708 - 18.806: 99.8137% ( 2) 00:07:44.613 18.806 - 18.905: 99.8200% ( 1) 00:07:44.613 18.905 - 19.003: 99.8262% ( 1) 00:07:44.613 19.003 - 19.102: 99.8324% ( 1) 00:07:44.613 19.102 - 19.200: 99.8386% ( 1) 00:07:44.613 19.298 - 19.397: 99.8510% ( 2) 00:07:44.613 19.397 - 19.495: 99.8572% ( 1) 00:07:44.613 19.594 - 19.692: 99.8634% ( 1) 00:07:44.613 20.086 - 20.185: 99.8696% ( 1) 00:07:44.613 20.382 - 20.480: 99.8820% ( 2) 00:07:44.613 20.677 - 20.775: 99.8882% ( 1) 00:07:44.613 21.071 - 21.169: 99.9007% ( 2) 00:07:44.613 21.169 - 21.268: 99.9131% ( 2) 00:07:44.613 22.252 - 22.351: 99.9193% ( 1) 00:07:44.613 22.745 - 22.843: 99.9255% ( 1) 00:07:44.613 25.108 - 25.206: 99.9317% ( 1) 00:07:44.613 26.978 - 27.175: 99.9379% ( 1) 00:07:44.613 28.948 - 29.145: 99.9441% ( 1) 00:07:44.613 30.326 - 30.523: 99.9503% ( 1) 00:07:44.613 36.234 - 36.431: 99.9565% ( 1) 00:07:44.613 39.582 - 39.778: 99.9627% ( 1) 00:07:44.613 40.763 - 40.960: 99.9690% ( 1) 00:07:44.613 42.929 - 43.126: 99.9752% ( 1) 00:07:44.613 44.308 - 44.505: 99.9814% ( 1) 00:07:44.613 48.837 - 49.034: 99.9876% ( 1) 00:07:44.613 57.895 - 58.289: 99.9938% ( 1) 00:07:44.613 185.895 - 186.683: 100.0000% ( 1) 00:07:44.613 00:07:44.613 Complete histogram 00:07:44.613 ================== 00:07:44.613 Range in us Cumulative Count 00:07:44.613 7.286 - 7.335: 0.0745% ( 12) 00:07:44.613 7.335 - 7.385: 1.4714% ( 225) 00:07:44.613 7.385 - 7.434: 11.9513% ( 1688) 00:07:44.613 7.434 - 7.483: 38.7782% ( 4321) 00:07:44.613 7.483 - 7.532: 65.3070% ( 4273) 00:07:44.613 7.532 - 7.582: 81.3373% ( 2582) 00:07:44.613 7.582 - 7.631: 89.9112% ( 1381) 00:07:44.613 7.631 - 7.680: 93.8288% ( 631) 00:07:44.613 7.680 - 7.729: 95.9521% ( 342) 00:07:44.613 7.729 - 7.778: 97.0323% ( 174) 00:07:44.613 7.778 - 7.828: 97.5787% ( 88) 00:07:44.613 7.828 - 7.877: 97.7277% ( 24) 00:07:44.613 7.877 - 7.926: 97.8394% ( 18) 00:07:44.613 7.926 - 7.975: 97.8953% ( 9) 00:07:44.613 7.975 - 8.025: 97.9140% ( 3) 00:07:44.613 8.025 - 8.074: 97.9388% ( 4) 00:07:44.613 8.074 - 8.123: 97.9636% ( 4) 00:07:44.613 8.172 - 8.222: 97.9822% ( 3) 00:07:44.613 8.222 - 8.271: 97.9885% ( 1) 00:07:44.613 8.271 - 8.320: 97.9947% ( 1) 00:07:44.613 8.369 - 8.418: 98.0009% ( 1) 00:07:44.613 8.418 - 
8.468: 98.0071% ( 1) 00:07:44.613 8.468 - 8.517: 98.0133% ( 1) 00:07:44.613 8.517 - 8.566: 98.0319% ( 3) 00:07:44.613 8.566 - 8.615: 98.0381% ( 1) 00:07:44.613 8.665 - 8.714: 98.0505% ( 2) 00:07:44.613 8.911 - 8.960: 98.0567% ( 1) 00:07:44.613 9.600 - 9.649: 98.0692% ( 2) 00:07:44.613 9.649 - 9.698: 98.0878% ( 3) 00:07:44.613 9.698 - 9.748: 98.1375% ( 8) 00:07:44.613 9.748 - 9.797: 98.2182% ( 13) 00:07:44.613 9.797 - 9.846: 98.3175% ( 16) 00:07:44.613 9.846 - 9.895: 98.3734% ( 9) 00:07:44.613 9.895 - 9.945: 98.3982% ( 4) 00:07:44.613 9.945 - 9.994: 98.4106% ( 2) 00:07:44.613 9.994 - 10.043: 98.4230% ( 2) 00:07:44.613 10.043 - 10.092: 98.4665% ( 7) 00:07:44.613 10.092 - 10.142: 98.5038% ( 6) 00:07:44.613 10.142 - 10.191: 98.5224% ( 3) 00:07:44.613 10.191 - 10.240: 98.5348% ( 2) 00:07:44.613 10.486 - 10.535: 98.5472% ( 2) 00:07:44.613 10.535 - 10.585: 98.5534% ( 1) 00:07:44.613 10.585 - 10.634: 98.5596% ( 1) 00:07:44.613 10.880 - 10.929: 98.5658% ( 1) 00:07:44.613 10.929 - 10.978: 98.5720% ( 1) 00:07:44.613 11.028 - 11.077: 98.5845% ( 2) 00:07:44.613 11.471 - 11.520: 98.5907% ( 1) 00:07:44.613 11.618 - 11.668: 98.5969% ( 1) 00:07:44.613 11.668 - 11.717: 98.6031% ( 1) 00:07:44.613 11.717 - 11.766: 98.6093% ( 1) 00:07:44.613 11.766 - 11.815: 98.6155% ( 1) 00:07:44.613 11.963 - 12.012: 98.6279% ( 2) 00:07:44.613 12.258 - 12.308: 98.6341% ( 1) 00:07:44.613 12.308 - 12.357: 98.6403% ( 1) 00:07:44.613 12.357 - 12.406: 98.6466% ( 1) 00:07:44.613 12.505 - 12.554: 98.6528% ( 1) 00:07:44.613 12.603 - 12.702: 98.6590% ( 1) 00:07:44.613 12.702 - 12.800: 98.6776% ( 3) 00:07:44.613 12.800 - 12.898: 98.7211% ( 7) 00:07:44.613 12.898 - 12.997: 98.7583% ( 6) 00:07:44.613 12.997 - 13.095: 98.8576% ( 16) 00:07:44.613 13.095 - 13.194: 98.9942% ( 22) 00:07:44.613 13.194 - 13.292: 99.1556% ( 26) 00:07:44.613 13.292 - 13.391: 99.2364% ( 13) 00:07:44.613 13.391 - 13.489: 99.3481% ( 18) 00:07:44.613 13.489 - 13.588: 99.4288% ( 13) 00:07:44.613 13.588 - 13.686: 99.4971% ( 11) 00:07:44.613 13.686 - 13.785: 99.5654% ( 11) 00:07:44.613 13.785 - 13.883: 99.6337% ( 11) 00:07:44.613 13.883 - 13.982: 99.6834% ( 8) 00:07:44.613 13.982 - 14.080: 99.7020% ( 3) 00:07:44.613 14.080 - 14.178: 99.7330% ( 5) 00:07:44.613 14.178 - 14.277: 99.7579% ( 4) 00:07:44.613 14.277 - 14.375: 99.7827% ( 4) 00:07:44.613 14.375 - 14.474: 99.8013% ( 3) 00:07:44.613 14.474 - 14.572: 99.8075% ( 1) 00:07:44.613 14.572 - 14.671: 99.8200% ( 2) 00:07:44.613 14.671 - 14.769: 99.8262% ( 1) 00:07:44.613 14.868 - 14.966: 99.8324% ( 1) 00:07:44.613 14.966 - 15.065: 99.8386% ( 1) 00:07:44.613 15.360 - 15.458: 99.8510% ( 2) 00:07:44.613 15.458 - 15.557: 99.8572% ( 1) 00:07:44.613 15.754 - 15.852: 99.8634% ( 1) 00:07:44.613 16.443 - 16.542: 99.8696% ( 1) 00:07:44.613 16.542 - 16.640: 99.8758% ( 1) 00:07:44.613 16.640 - 16.738: 99.8882% ( 2) 00:07:44.613 16.837 - 16.935: 99.8945% ( 1) 00:07:44.613 17.034 - 17.132: 99.9069% ( 2) 00:07:44.613 17.428 - 17.526: 99.9193% ( 2) 00:07:44.613 17.625 - 17.723: 99.9255% ( 1) 00:07:44.613 17.723 - 17.822: 99.9317% ( 1) 00:07:44.613 18.412 - 18.511: 99.9379% ( 1) 00:07:44.613 18.905 - 19.003: 99.9441% ( 1) 00:07:44.613 19.003 - 19.102: 99.9503% ( 1) 00:07:44.613 20.382 - 20.480: 99.9565% ( 1) 00:07:44.613 21.366 - 21.465: 99.9627% ( 1) 00:07:44.613 25.206 - 25.403: 99.9690% ( 1) 00:07:44.613 25.994 - 26.191: 99.9752% ( 1) 00:07:44.613 31.508 - 31.705: 99.9814% ( 1) 00:07:44.613 50.412 - 50.806: 99.9876% ( 1) 00:07:44.613 57.108 - 57.502: 99.9938% ( 1) 00:07:44.613 330.831 - 332.406: 100.0000% ( 1) 00:07:44.613 00:07:44.613 
************************************ 00:07:44.613 END TEST nvme_overhead 00:07:44.613 ************************************ 00:07:44.613 00:07:44.613 real 0m1.209s 00:07:44.613 user 0m1.066s 00:07:44.613 sys 0m0.094s 00:07:44.613 17:49:02 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:44.613 17:49:02 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:44.613 17:49:02 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:44.613 17:49:02 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:44.613 17:49:02 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.613 17:49:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:44.613 ************************************ 00:07:44.613 START TEST nvme_arbitration 00:07:44.613 ************************************ 00:07:44.614 17:49:02 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:47.894 Initializing NVMe Controllers 00:07:47.894 Attached to 0000:00:10.0 00:07:47.894 Attached to 0000:00:11.0 00:07:47.894 Attached to 0000:00:13.0 00:07:47.894 Attached to 0000:00:12.0 00:07:47.894 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:07:47.894 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:07:47.894 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:07:47.894 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:47.894 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:47.894 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:47.894 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:47.894 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:47.894 Initialization complete. Launching workers. 
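The per-core figures printed below are internally consistent with the -n 100000 target in the configuration echo above: the "secs/100000 ios" column is simply the IO count divided by the measured rate. A quick check, assuming bc is available:

    echo 'scale=2; 100000 / 981.33' | bc   # -> 101.90, matching "981.33 IO/s 101.90 secs/100000 ios"
    echo 'scale=2; 100000 / 938.67' | bc   # -> 106.53, matching the core 1 rows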
00:07:47.894 Starting thread on core 1 with urgent priority queue 00:07:47.894 Starting thread on core 2 with urgent priority queue 00:07:47.894 Starting thread on core 3 with urgent priority queue 00:07:47.894 Starting thread on core 0 with urgent priority queue 00:07:47.894 QEMU NVMe Ctrl (12340 ) core 0: 981.33 IO/s 101.90 secs/100000 ios 00:07:47.894 QEMU NVMe Ctrl (12342 ) core 0: 981.33 IO/s 101.90 secs/100000 ios 00:07:47.894 QEMU NVMe Ctrl (12341 ) core 1: 938.67 IO/s 106.53 secs/100000 ios 00:07:47.894 QEMU NVMe Ctrl (12342 ) core 1: 938.67 IO/s 106.53 secs/100000 ios 00:07:47.894 QEMU NVMe Ctrl (12343 ) core 2: 960.00 IO/s 104.17 secs/100000 ios 00:07:47.894 QEMU NVMe Ctrl (12342 ) core 3: 960.00 IO/s 104.17 secs/100000 ios 00:07:47.894 ======================================================== 00:07:47.894 00:07:47.894 ************************************ 00:07:47.894 END TEST nvme_arbitration 00:07:47.894 ************************************ 00:07:47.894 00:07:47.894 real 0m3.300s 00:07:47.894 user 0m9.203s 00:07:47.894 sys 0m0.117s 00:07:47.894 17:49:05 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.894 17:49:05 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:47.894 17:49:06 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:47.894 17:49:06 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:47.894 17:49:06 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.894 17:49:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:47.894 ************************************ 00:07:47.894 START TEST nvme_single_aen 00:07:47.894 ************************************ 00:07:47.894 17:49:06 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:47.894 Asynchronous Event Request test 00:07:47.894 Attached to 0000:00:10.0 00:07:47.894 Attached to 0000:00:11.0 00:07:47.894 Attached to 0000:00:13.0 00:07:47.894 Attached to 0000:00:12.0 00:07:47.894 Reset controller to setup AER completions for this process 00:07:47.894 Registering asynchronous event callbacks... 
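The sequence below is how the AER test forces a temperature event on demand: it records the original threshold (343 Kelvin), sets the threshold below the drive's current composite temperature (323 Kelvin) so the controller posts an asynchronous event immediately, then restores the original value. Outside SPDK the same trick can be played with nvme-cli; a hedged equivalent, with a hypothetical device node:

    # Feature 0x04 is Temperature Threshold (value in Kelvin per the NVMe spec).
    nvme set-feature /dev/nvme0 -f 0x04 -v 318   # 318 K < the 323 K reading -> temperature AER fires
    nvme set-feature /dev/nvme0 -f 0x04 -v 343   # restore the original threshold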
00:07:47.894 Getting orig temperature thresholds of all controllers 00:07:47.894 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:47.894 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:47.894 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:47.894 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:47.894 Setting all controllers temperature threshold low to trigger AER 00:07:47.894 Waiting for all controllers temperature threshold to be set lower 00:07:47.894 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:47.894 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:47.894 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:47.894 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:47.894 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:47.894 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:47.894 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:47.894 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:47.894 Waiting for all controllers to trigger AER and reset threshold 00:07:47.894 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:47.894 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:47.894 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:47.894 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:47.894 Cleaning up... 00:07:47.894 00:07:47.894 real 0m0.223s 00:07:47.894 user 0m0.074s 00:07:47.894 sys 0m0.101s 00:07:47.894 ************************************ 00:07:47.894 END TEST nvme_single_aen 00:07:47.894 ************************************ 00:07:47.894 17:49:06 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.894 17:49:06 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:47.895 17:49:06 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:47.895 17:49:06 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:47.895 17:49:06 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.895 17:49:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:47.895 ************************************ 00:07:47.895 START TEST nvme_doorbell_aers 00:07:47.895 ************************************ 00:07:47.895 17:49:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:07:47.895 17:49:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:47.895 17:49:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:47.895 17:49:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:47.895 17:49:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:47.895 17:49:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1494 -- # bdfs=() 00:07:47.895 17:49:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1494 -- # local bdfs 00:07:47.895 17:49:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:47.895 17:49:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1495 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:47.895 17:49:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 
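The xtrace just above and below shows how nvme_doorbell_aers enumerates controllers and then stresses each one. A readable reconstruction of those traced steps (a simplified sketch assuming $rootdir points at the SPDK checkout, not the verbatim helpers):

    get_nvme_bdfs() {
        local bdfs
        # gen_nvme.sh emits a JSON config; pull each controller's PCI address from it
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} == 0 )) && return 1   # the "(( 4 == 0 ))" check in the trace
        printf '%s\n' "${bdfs[@]}"
    }

    bdfs=($(get_nvme_bdfs))
    for bdf in "${bdfs[@]}"; do   # one bounded doorbell/AER pass per controller
        timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
    done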
00:07:48.153 17:49:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # (( 4 == 0 )) 00:07:48.153 17:49:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:48.153 17:49:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:48.153 17:49:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:48.153 [2024-10-25 17:49:06.523986] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63135) is not found. Dropping the request. 00:07:58.129 Executing: test_write_invalid_db 00:07:58.129 Waiting for AER completion... 00:07:58.129 Failure: test_write_invalid_db 00:07:58.129 00:07:58.129 Executing: test_invalid_db_write_overflow_sq 00:07:58.129 Waiting for AER completion... 00:07:58.129 Failure: test_invalid_db_write_overflow_sq 00:07:58.129 00:07:58.129 Executing: test_invalid_db_write_overflow_cq 00:07:58.129 Waiting for AER completion... 00:07:58.129 Failure: test_invalid_db_write_overflow_cq 00:07:58.129 00:07:58.129 17:49:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:58.129 17:49:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:07:58.387 [2024-10-25 17:49:16.576296] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63135) is not found. Dropping the request. 00:08:08.355 Executing: test_write_invalid_db 00:08:08.355 Waiting for AER completion... 00:08:08.355 Failure: test_write_invalid_db 00:08:08.355 00:08:08.355 Executing: test_invalid_db_write_overflow_sq 00:08:08.355 Waiting for AER completion... 00:08:08.355 Failure: test_invalid_db_write_overflow_sq 00:08:08.355 00:08:08.355 Executing: test_invalid_db_write_overflow_cq 00:08:08.355 Waiting for AER completion... 00:08:08.355 Failure: test_invalid_db_write_overflow_cq 00:08:08.355 00:08:08.355 17:49:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:08.355 17:49:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:08.355 [2024-10-25 17:49:26.632963] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63135) is not found. Dropping the request. 00:08:18.369 Executing: test_write_invalid_db 00:08:18.369 Waiting for AER completion... 00:08:18.369 Failure: test_write_invalid_db 00:08:18.369 00:08:18.369 Executing: test_invalid_db_write_overflow_sq 00:08:18.369 Waiting for AER completion... 00:08:18.369 Failure: test_invalid_db_write_overflow_sq 00:08:18.369 00:08:18.369 Executing: test_invalid_db_write_overflow_cq 00:08:18.369 Waiting for AER completion... 
00:08:18.369 Failure: test_invalid_db_write_overflow_cq 00:08:18.369 00:08:18.369 17:49:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:18.369 17:49:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:18.369 [2024-10-25 17:49:36.640917] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63135) is not found. Dropping the request. 00:08:28.353 Executing: test_write_invalid_db 00:08:28.353 Waiting for AER completion... 00:08:28.353 Failure: test_write_invalid_db 00:08:28.353 00:08:28.353 Executing: test_invalid_db_write_overflow_sq 00:08:28.353 Waiting for AER completion... 00:08:28.353 Failure: test_invalid_db_write_overflow_sq 00:08:28.353 00:08:28.353 Executing: test_invalid_db_write_overflow_cq 00:08:28.353 Waiting for AER completion... 00:08:28.353 Failure: test_invalid_db_write_overflow_cq 00:08:28.354 00:08:28.354 00:08:28.354 real 0m40.196s 00:08:28.354 user 0m34.274s 00:08:28.354 sys 0m5.554s 00:08:28.354 ************************************ 00:08:28.354 END TEST nvme_doorbell_aers 00:08:28.354 ************************************ 00:08:28.354 17:49:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.354 17:49:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:28.354 17:49:46 nvme -- nvme/nvme.sh@97 -- # uname 00:08:28.354 17:49:46 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:28.354 17:49:46 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:28.354 17:49:46 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:28.354 17:49:46 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.354 17:49:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:28.354 ************************************ 00:08:28.354 START TEST nvme_multi_aen 00:08:28.354 ************************************ 00:08:28.354 17:49:46 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:28.354 [2024-10-25 17:49:46.713778] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63135) is not found. Dropping the request. 00:08:28.354 [2024-10-25 17:49:46.713844] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63135) is not found. Dropping the request. 00:08:28.354 [2024-10-25 17:49:46.713855] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63135) is not found. Dropping the request. 00:08:28.354 [2024-10-25 17:49:46.715155] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63135) is not found. Dropping the request. 00:08:28.354 [2024-10-25 17:49:46.715183] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63135) is not found. Dropping the request. 00:08:28.354 [2024-10-25 17:49:46.715191] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63135) is not found. Dropping the request. 00:08:28.354 [2024-10-25 17:49:46.716603] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63135) is not found. 
Dropping the request. 00:08:28.354 [2024-10-25 17:49:46.716736] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63135) is not found. Dropping the request. 00:08:28.354 [2024-10-25 17:49:46.716797] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63135) is not found. Dropping the request. 00:08:28.354 [2024-10-25 17:49:46.717853] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63135) is not found. Dropping the request. 00:08:28.354 [2024-10-25 17:49:46.717955] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63135) is not found. Dropping the request. 00:08:28.354 [2024-10-25 17:49:46.718025] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63135) is not found. Dropping the request. 00:08:28.354 Child process pid: 63657 00:08:28.641 [Child] Asynchronous Event Request test 00:08:28.641 [Child] Attached to 0000:00:10.0 00:08:28.641 [Child] Attached to 0000:00:11.0 00:08:28.641 [Child] Attached to 0000:00:13.0 00:08:28.641 [Child] Attached to 0000:00:12.0 00:08:28.641 [Child] Registering asynchronous event callbacks... 00:08:28.641 [Child] Getting orig temperature thresholds of all controllers 00:08:28.641 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:28.641 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:28.641 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:28.641 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:28.641 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:28.641 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:28.641 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:28.641 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:28.641 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:28.641 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:28.641 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:28.641 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:28.641 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:28.641 [Child] Cleaning up... 00:08:28.641 Asynchronous Event Request test 00:08:28.641 Attached to 0000:00:10.0 00:08:28.641 Attached to 0000:00:11.0 00:08:28.641 Attached to 0000:00:13.0 00:08:28.641 Attached to 0000:00:12.0 00:08:28.641 Reset controller to setup AER completions for this process 00:08:28.641 Registering asynchronous event callbacks... 
00:08:28.641 Getting orig temperature thresholds of all controllers 00:08:28.641 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:28.641 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:28.641 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:28.641 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:28.641 Setting all controllers temperature threshold low to trigger AER 00:08:28.641 Waiting for all controllers temperature threshold to be set lower 00:08:28.641 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:28.641 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:28.641 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:28.641 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:28.641 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:28.641 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:28.641 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:28.641 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:28.641 Waiting for all controllers to trigger AER and reset threshold 00:08:28.641 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:28.641 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:28.641 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:28.641 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:28.641 Cleaning up... 00:08:28.641 ************************************ 00:08:28.641 END TEST nvme_multi_aen 00:08:28.641 ************************************ 00:08:28.641 00:08:28.641 real 0m0.451s 00:08:28.641 user 0m0.131s 00:08:28.641 sys 0m0.199s 00:08:28.641 17:49:46 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.641 17:49:46 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:28.641 17:49:47 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:28.641 17:49:47 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:28.641 17:49:47 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.641 17:49:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:28.641 ************************************ 00:08:28.641 START TEST nvme_startup 00:08:28.641 ************************************ 00:08:28.641 17:49:47 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:28.900 Initializing NVMe Controllers 00:08:28.900 Attached to 0000:00:10.0 00:08:28.900 Attached to 0000:00:11.0 00:08:28.900 Attached to 0000:00:13.0 00:08:28.900 Attached to 0000:00:12.0 00:08:28.900 Initialization complete. 00:08:28.900 Time used:137173.438 (us). 
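The startup tool reports its init time in microseconds; converting confirms it fits inside the 0m0.197s wall time reported just below, the remainder being process setup and teardown:

    echo 'scale=3; 137173.438 / 1000000' | bc   # -> .137 s of measured initialization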
00:08:28.900 00:08:28.900 real 0m0.197s 00:08:28.900 user 0m0.065s 00:08:28.900 sys 0m0.084s 00:08:28.900 ************************************ 00:08:28.900 END TEST nvme_startup 00:08:28.900 ************************************ 00:08:28.900 17:49:47 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.900 17:49:47 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:28.900 17:49:47 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:28.900 17:49:47 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:28.900 17:49:47 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.900 17:49:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:28.900 ************************************ 00:08:28.900 START TEST nvme_multi_secondary 00:08:28.900 ************************************ 00:08:28.900 17:49:47 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:08:28.900 17:49:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63713 00:08:28.900 17:49:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:28.900 17:49:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63714 00:08:28.900 17:49:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:28.900 17:49:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:32.183 Initializing NVMe Controllers 00:08:32.183 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:32.183 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:32.183 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:32.183 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:32.183 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:32.183 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:32.183 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:32.183 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:32.183 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:32.183 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:32.183 Initialization complete. Launching workers. 
00:08:32.183 ======================================================== 00:08:32.183 Latency(us) 00:08:32.183 Device Information : IOPS MiB/s Average min max 00:08:32.183 PCIE (0000:00:10.0) NSID 1 from core 1: 7667.86 29.95 2085.29 765.43 7924.18 00:08:32.183 PCIE (0000:00:11.0) NSID 1 from core 1: 7667.86 29.95 2086.39 782.57 7139.63 00:08:32.183 PCIE (0000:00:13.0) NSID 1 from core 1: 7667.86 29.95 2086.44 747.28 7125.48 00:08:32.183 PCIE (0000:00:12.0) NSID 1 from core 1: 7667.86 29.95 2086.39 748.99 6417.40 00:08:32.183 PCIE (0000:00:12.0) NSID 2 from core 1: 7667.86 29.95 2086.42 746.58 6837.12 00:08:32.183 PCIE (0000:00:12.0) NSID 3 from core 1: 7667.86 29.95 2086.42 803.51 7256.50 00:08:32.183 ======================================================== 00:08:32.183 Total : 46007.16 179.72 2086.23 746.58 7924.18 00:08:32.183 00:08:32.441 Initializing NVMe Controllers 00:08:32.441 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:32.441 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:32.441 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:32.441 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:32.441 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:32.441 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:32.441 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:32.441 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:32.441 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:32.441 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:32.441 Initialization complete. Launching workers. 00:08:32.441 ======================================================== 00:08:32.441 Latency(us) 00:08:32.441 Device Information : IOPS MiB/s Average min max 00:08:32.441 PCIE (0000:00:10.0) NSID 1 from core 2: 3065.06 11.97 5218.44 1121.18 18329.44 00:08:32.441 PCIE (0000:00:11.0) NSID 1 from core 2: 3065.06 11.97 5218.86 1033.17 19412.86 00:08:32.441 PCIE (0000:00:13.0) NSID 1 from core 2: 3065.06 11.97 5219.57 1038.13 17417.80 00:08:32.441 PCIE (0000:00:12.0) NSID 1 from core 2: 3065.06 11.97 5219.44 1079.18 16858.32 00:08:32.441 PCIE (0000:00:12.0) NSID 2 from core 2: 3065.06 11.97 5219.03 1040.72 15131.49 00:08:32.441 PCIE (0000:00:12.0) NSID 3 from core 2: 3065.06 11.97 5218.86 1003.28 15417.69 00:08:32.441 ======================================================== 00:08:32.441 Total : 18390.38 71.84 5219.03 1003.28 19412.86 00:08:32.441 00:08:32.441 17:49:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63713 00:08:34.337 Initializing NVMe Controllers 00:08:34.337 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:34.338 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:34.338 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:34.338 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:34.338 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:34.338 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:34.338 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:34.338 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:34.338 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:34.338 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:34.338 Initialization complete. Launching workers. 
00:08:34.338 ======================================================== 00:08:34.338 Latency(us) 00:08:34.338 Device Information : IOPS MiB/s Average min max 00:08:34.338 PCIE (0000:00:10.0) NSID 1 from core 0: 10966.43 42.84 1457.89 728.58 7548.95 00:08:34.338 PCIE (0000:00:11.0) NSID 1 from core 0: 10966.43 42.84 1458.72 743.89 6601.89 00:08:34.338 PCIE (0000:00:13.0) NSID 1 from core 0: 10966.43 42.84 1458.76 738.83 6425.38 00:08:34.338 PCIE (0000:00:12.0) NSID 1 from core 0: 10966.43 42.84 1458.80 741.71 6502.71 00:08:34.338 PCIE (0000:00:12.0) NSID 2 from core 0: 10966.43 42.84 1458.84 740.40 7363.47 00:08:34.338 PCIE (0000:00:12.0) NSID 3 from core 0: 10966.43 42.84 1458.88 743.96 7788.95 00:08:34.338 ======================================================== 00:08:34.338 Total : 65798.56 257.03 1458.65 728.58 7788.95 00:08:34.338 00:08:34.338 17:49:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63714 00:08:34.338 17:49:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63783 00:08:34.338 17:49:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:34.338 17:49:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63784 00:08:34.338 17:49:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:34.338 17:49:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:37.622 Initializing NVMe Controllers 00:08:37.622 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:37.622 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:37.622 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:37.622 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:37.622 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:37.622 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:37.622 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:37.622 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:37.622 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:37.622 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:37.622 Initialization complete. Launching workers. 
00:08:37.622 ======================================================== 00:08:37.622 Latency(us) 00:08:37.622 Device Information : IOPS MiB/s Average min max 00:08:37.622 PCIE (0000:00:10.0) NSID 1 from core 0: 6519.19 25.47 2452.95 738.55 9171.67 00:08:37.622 PCIE (0000:00:11.0) NSID 1 from core 0: 6519.19 25.47 2454.31 749.61 8356.33 00:08:37.622 PCIE (0000:00:13.0) NSID 1 from core 0: 6519.19 25.47 2454.72 757.52 8180.59 00:08:37.622 PCIE (0000:00:12.0) NSID 1 from core 0: 6519.19 25.47 2454.70 758.23 8108.54 00:08:37.622 PCIE (0000:00:12.0) NSID 2 from core 0: 6519.19 25.47 2455.19 750.85 9101.15 00:08:37.622 PCIE (0000:00:12.0) NSID 3 from core 0: 6519.19 25.47 2455.16 757.85 8866.51 00:08:37.622 ======================================================== 00:08:37.622 Total : 39115.16 152.79 2454.51 738.55 9171.67 00:08:37.622 00:08:37.622 Initializing NVMe Controllers 00:08:37.622 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:37.622 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:37.622 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:37.622 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:37.622 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:37.622 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:37.622 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:37.622 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:37.622 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:37.622 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:37.622 Initialization complete. Launching workers. 00:08:37.623 ======================================================== 00:08:37.623 Latency(us) 00:08:37.623 Device Information : IOPS MiB/s Average min max 00:08:37.623 PCIE (0000:00:10.0) NSID 1 from core 1: 5448.80 21.28 2934.93 969.95 11227.14 00:08:37.623 PCIE (0000:00:11.0) NSID 1 from core 1: 5448.80 21.28 2936.19 1004.34 12461.31 00:08:37.623 PCIE (0000:00:13.0) NSID 1 from core 1: 5448.80 21.28 2936.16 1033.38 11363.43 00:08:37.623 PCIE (0000:00:12.0) NSID 1 from core 1: 5448.80 21.28 2936.14 1068.76 11660.76 00:08:37.623 PCIE (0000:00:12.0) NSID 2 from core 1: 5448.80 21.28 2936.09 885.31 11523.56 00:08:37.623 PCIE (0000:00:12.0) NSID 3 from core 1: 5448.80 21.28 2936.06 895.10 9722.41 00:08:37.623 ======================================================== 00:08:37.623 Total : 32692.81 127.71 2935.93 885.31 12461.31 00:08:37.623 00:08:39.536 Initializing NVMe Controllers 00:08:39.536 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:39.536 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:39.536 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:39.536 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:39.536 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:39.536 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:39.536 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:39.536 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:39.536 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:39.536 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:39.536 Initialization complete. Launching workers. 
00:08:39.536 ======================================================== 00:08:39.536 Latency(us) 00:08:39.536 Device Information : IOPS MiB/s Average min max 00:08:39.536 PCIE (0000:00:10.0) NSID 1 from core 2: 2559.67 10.00 6249.20 868.85 17994.84 00:08:39.536 PCIE (0000:00:11.0) NSID 1 from core 2: 2559.67 10.00 6250.30 880.41 21493.71 00:08:39.536 PCIE (0000:00:13.0) NSID 1 from core 2: 2559.67 10.00 6249.86 876.82 21920.48 00:08:39.536 PCIE (0000:00:12.0) NSID 1 from core 2: 2559.67 10.00 6249.74 867.09 20261.47 00:08:39.536 PCIE (0000:00:12.0) NSID 2 from core 2: 2559.67 10.00 6249.93 875.75 20640.78 00:08:39.536 PCIE (0000:00:12.0) NSID 3 from core 2: 2559.67 10.00 6250.11 876.79 20450.42 00:08:39.536 ======================================================== 00:08:39.536 Total : 15358.05 59.99 6249.86 867.09 21920.48 00:08:39.536 00:08:39.834 ************************************ 00:08:39.834 END TEST nvme_multi_secondary 00:08:39.834 ************************************ 00:08:39.834 17:49:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63783 00:08:39.834 17:49:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63784 00:08:39.834 00:08:39.834 real 0m10.759s 00:08:39.834 user 0m18.406s 00:08:39.834 sys 0m0.712s 00:08:39.834 17:49:58 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.834 17:49:58 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:39.834 17:49:58 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:39.834 17:49:58 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:39.834 17:49:58 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/62744 ]] 00:08:39.834 17:49:58 nvme -- common/autotest_common.sh@1090 -- # kill 62744 00:08:39.834 17:49:58 nvme -- common/autotest_common.sh@1091 -- # wait 62744 00:08:39.834 [2024-10-25 17:49:58.061323] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63656) is not found. Dropping the request. 00:08:39.834 [2024-10-25 17:49:58.061533] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63656) is not found. Dropping the request. 00:08:39.834 [2024-10-25 17:49:58.061586] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63656) is not found. Dropping the request. 00:08:39.834 [2024-10-25 17:49:58.061604] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63656) is not found. Dropping the request. 00:08:39.834 [2024-10-25 17:49:58.063834] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63656) is not found. Dropping the request. 00:08:39.834 [2024-10-25 17:49:58.063886] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63656) is not found. Dropping the request. 00:08:39.834 [2024-10-25 17:49:58.063904] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63656) is not found. Dropping the request. 00:08:39.834 [2024-10-25 17:49:58.063921] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63656) is not found. Dropping the request. 00:08:39.834 [2024-10-25 17:49:58.066082] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63656) is not found. Dropping the request. 
00:08:39.834 [2024-10-25 17:49:58.066128] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63656) is not found. Dropping the request. 00:08:39.834 [2024-10-25 17:49:58.066143] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63656) is not found. Dropping the request. 00:08:39.834 [2024-10-25 17:49:58.066159] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63656) is not found. Dropping the request. 00:08:39.834 [2024-10-25 17:49:58.068345] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63656) is not found. Dropping the request. 00:08:39.835 [2024-10-25 17:49:58.068464] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63656) is not found. Dropping the request. 00:08:39.835 [2024-10-25 17:49:58.068487] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63656) is not found. Dropping the request. 00:08:39.835 [2024-10-25 17:49:58.068498] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63656) is not found. Dropping the request. 00:08:39.835 [2024-10-25 17:49:58.185259] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:08:39.835 17:49:58 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:08:39.835 17:49:58 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:08:39.835 17:49:58 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:39.835 17:49:58 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:39.835 17:49:58 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.835 17:49:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:39.835 ************************************ 00:08:39.835 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:39.835 ************************************ 00:08:39.835 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:39.835 * Looking for test storage... 
00:08:40.095 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1689 -- # lcov --version 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:08:40.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.095 --rc genhtml_branch_coverage=1 00:08:40.095 --rc genhtml_function_coverage=1 00:08:40.095 --rc genhtml_legend=1 00:08:40.095 --rc geninfo_all_blocks=1 00:08:40.095 --rc geninfo_unexecuted_blocks=1 00:08:40.095 00:08:40.095 ' 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:08:40.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.095 --rc genhtml_branch_coverage=1 00:08:40.095 --rc genhtml_function_coverage=1 00:08:40.095 --rc genhtml_legend=1 00:08:40.095 --rc geninfo_all_blocks=1 00:08:40.095 --rc geninfo_unexecuted_blocks=1 00:08:40.095 00:08:40.095 ' 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:08:40.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.095 --rc genhtml_branch_coverage=1 00:08:40.095 --rc genhtml_function_coverage=1 00:08:40.095 --rc genhtml_legend=1 00:08:40.095 --rc geninfo_all_blocks=1 00:08:40.095 --rc geninfo_unexecuted_blocks=1 00:08:40.095 00:08:40.095 ' 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:08:40.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.095 --rc genhtml_branch_coverage=1 00:08:40.095 --rc genhtml_function_coverage=1 00:08:40.095 --rc genhtml_legend=1 00:08:40.095 --rc geninfo_all_blocks=1 00:08:40.095 --rc geninfo_unexecuted_blocks=1 00:08:40.095 00:08:40.095 ' 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:40.095 
17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1505 -- # bdfs=() 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1505 -- # local bdfs 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1506 -- # bdfs=($(get_nvme_bdfs)) 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1506 -- # get_nvme_bdfs 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1494 -- # bdfs=() 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1494 -- # local bdfs 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1495 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # (( 4 == 0 )) 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # echo 0000:00:10.0 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:40.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=63951 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 63951 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 63951 ']' 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
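Before the target comes up, get_first_nvme_bdf (traced just above) asks gen_nvme.sh for a generated bdev config and filters it down to PCI addresses with jq; the test binds to the first address found. Standalone, under the repo layout used here:

rootdir=/home/vagrant/spdk_repo/spdk
# gen_nvme.sh emits one bdev_nvme_attach_controller entry per local NVMe
# device; jq reduces that JSON to the bare traddr values.
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
bdf=${bdfs[0]}    # 0000:00:10.0 on this machine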
00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:40.095 17:49:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:40.095 [2024-10-25 17:49:58.495742] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:08:40.095 [2024-10-25 17:49:58.495996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63951 ] 00:08:40.355 [2024-10-25 17:49:58.666518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:40.355 [2024-10-25 17:49:58.786447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.355 [2024-10-25 17:49:58.786779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.355 [2024-10-25 17:49:58.786765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.355 [2024-10-25 17:49:58.786695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:41.293 17:49:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:41.293 17:49:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:08:41.293 17:49:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:41.293 17:49:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.293 17:49:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:41.293 nvme0n1 00:08:41.293 17:49:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.293 17:49:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:41.293 17:49:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_tonN0.txt 00:08:41.293 17:49:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:41.293 17:49:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.293 17:49:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:41.293 true 00:08:41.293 17:49:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.293 17:49:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:41.293 17:49:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1729878599 00:08:41.293 17:49:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=63974 00:08:41.293 17:49:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:41.293 17:49:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:41.293 17:49:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:43.221 [2024-10-25 17:50:01.536713] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:43.221 [2024-10-25 17:50:01.537358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:43.221 [2024-10-25 17:50:01.537403] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:43.221 [2024-10-25 17:50:01.537415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.221 [2024-10-25 17:50:01.541087] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:43.221 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 63974 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 63974 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 63974 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_tonN0.txt 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_tonN0.txt 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 63951 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 63951 ']' 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 63951 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:43.221 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63951 00:08:43.221 killing process with pid 63951 00:08:43.222 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:43.222 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:43.222 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63951' 00:08:43.222 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 63951 00:08:43.222 17:50:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 63951 00:08:44.605 17:50:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:44.605 17:50:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:44.605 00:08:44.605 real 0m4.720s 00:08:44.605 user 0m16.567s 00:08:44.605 sys 0m0.577s 00:08:44.605 ************************************ 00:08:44.605 END TEST bdev_nvme_reset_stuck_adm_cmd 
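Stripped of xtrace noise, the reset-stuck-adm-cmd test that just ended is a short RPC conversation with the spdk_tgt started earlier. A condensed sketch of the sequence (rpc.py talks to /var/tmp/spdk.sock by default; $cmd_base64 stands for the payload logged above):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
# Arm a one-shot failure for the next admin command with opcode 10 (Get
# Features); --do_not_submit holds it for up to 15 s so it is genuinely stuck.
$rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
    --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
# Issue the Get Features that will hang, then reset the controller under it.
$rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$cmd_base64" &
sleep 2
$rpc bdev_nvme_reset_controller nvme0
wait
$rpc bdev_nvme_detach_controller nvme0

The completion saved by bdev_nvme_send_cmd is then read back (jq -r .cpl on the tmp file), base64-decoded through hexdump, and the status fields are compared with the injected values; that is where the nvme_status_sc=0x1 and nvme_status_sct=0x0 readouts above come from.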
00:08:44.605 ************************************ 00:08:44.605 17:50:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:44.605 17:50:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:44.605 17:50:02 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:44.605 17:50:02 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:44.605 17:50:02 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:44.605 17:50:02 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.605 17:50:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:44.605 ************************************ 00:08:44.605 START TEST nvme_fio 00:08:44.605 ************************************ 00:08:44.605 17:50:02 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:08:44.605 17:50:02 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:44.605 17:50:02 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:44.605 17:50:02 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:44.605 17:50:02 nvme.nvme_fio -- common/autotest_common.sh@1494 -- # bdfs=() 00:08:44.605 17:50:02 nvme.nvme_fio -- common/autotest_common.sh@1494 -- # local bdfs 00:08:44.605 17:50:02 nvme.nvme_fio -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:44.605 17:50:02 nvme.nvme_fio -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:08:44.605 17:50:02 nvme.nvme_fio -- common/autotest_common.sh@1495 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:44.605 17:50:03 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # (( 4 == 0 )) 00:08:44.605 17:50:03 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:44.605 17:50:03 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:44.605 17:50:03 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:44.605 17:50:03 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:44.605 17:50:03 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:44.605 17:50:03 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:44.862 17:50:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:44.862 17:50:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:45.120 17:50:03 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:45.120 17:50:03 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:45.120 17:50:03 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:45.120 17:50:03 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:45.120 17:50:03 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:45.120 17:50:03 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:45.120 17:50:03 nvme.nvme_fio -- 
common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:45.120 17:50:03 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:45.120 17:50:03 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:45.120 17:50:03 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:45.120 17:50:03 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:45.120 17:50:03 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:45.120 17:50:03 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:45.120 17:50:03 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:45.120 17:50:03 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:45.120 17:50:03 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:45.120 17:50:03 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:45.120 17:50:03 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:45.378 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:45.378 fio-3.35 00:08:45.378 Starting 1 thread 00:08:52.027 00:08:52.027 test: (groupid=0, jobs=1): err= 0: pid=64108: Fri Oct 25 17:50:10 2024 00:08:52.027 read: IOPS=23.8k, BW=92.9MiB/s (97.5MB/s)(186MiB/2001msec) 00:08:52.028 slat (usec): min=4, max=308, avg= 4.97, stdev= 2.73 00:08:52.028 clat (usec): min=647, max=9208, avg=2684.58, stdev=722.62 00:08:52.028 lat (usec): min=659, max=9213, avg=2689.54, stdev=723.78 00:08:52.028 clat percentiles (usec): 00:08:52.028 | 1.00th=[ 1860], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2409], 00:08:52.028 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2540], 00:08:52.028 | 70.00th=[ 2606], 80.00th=[ 2671], 90.00th=[ 2933], 95.00th=[ 3916], 00:08:52.028 | 99.00th=[ 6325], 99.50th=[ 7570], 99.90th=[ 8455], 99.95th=[ 8586], 00:08:52.028 | 99.99th=[ 8848] 00:08:52.028 bw ( KiB/s): min=90200, max=95720, per=98.21%, avg=93461.33, stdev=2893.37, samples=3 00:08:52.028 iops : min=22550, max=23930, avg=23365.33, stdev=723.34, samples=3 00:08:52.028 write: IOPS=23.6k, BW=92.3MiB/s (96.8MB/s)(185MiB/2001msec); 0 zone resets 00:08:52.028 slat (usec): min=4, max=113, avg= 5.19, stdev= 2.06 00:08:52.028 clat (usec): min=578, max=9060, avg=2690.19, stdev=723.42 00:08:52.028 lat (usec): min=591, max=9065, avg=2695.38, stdev=724.58 00:08:52.028 clat percentiles (usec): 00:08:52.028 | 1.00th=[ 1876], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2409], 00:08:52.028 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2540], 00:08:52.028 | 70.00th=[ 2606], 80.00th=[ 2671], 90.00th=[ 2966], 95.00th=[ 3949], 00:08:52.028 | 99.00th=[ 6194], 99.50th=[ 7504], 99.90th=[ 8455], 99.95th=[ 8455], 00:08:52.028 | 99.99th=[ 8979] 00:08:52.028 bw ( KiB/s): min=89592, max=96104, per=98.95%, avg=93565.33, stdev=3485.00, samples=3 00:08:52.028 iops : min=22398, max=24026, avg=23391.33, stdev=871.25, samples=3 00:08:52.028 lat (usec) : 750=0.01%, 1000=0.01% 00:08:52.028 lat (msec) : 2=1.85%, 4=93.48%, 10=4.66% 00:08:52.028 cpu : usr=98.90%, sys=0.05%, ctx=25, majf=0, minf=607 00:08:52.028 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:52.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:52.028 issued rwts: total=47607,47302,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:52.028 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:52.028 00:08:52.028 Run status group 0 (all jobs): 00:08:52.028 READ: bw=92.9MiB/s (97.5MB/s), 92.9MiB/s-92.9MiB/s (97.5MB/s-97.5MB/s), io=186MiB (195MB), run=2001-2001msec 00:08:52.028 WRITE: bw=92.3MiB/s (96.8MB/s), 92.3MiB/s-92.3MiB/s (96.8MB/s-96.8MB/s), io=185MiB (194MB), run=2001-2001msec 00:08:52.028 ----------------------------------------------------- 00:08:52.028 Suppressions used: 00:08:52.028 count bytes template 00:08:52.028 1 32 /usr/src/fio/parse.c 00:08:52.028 1 8 libtcmalloc_minimal.so 00:08:52.028 ----------------------------------------------------- 00:08:52.028 00:08:52.028 17:50:10 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:52.028 17:50:10 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:52.028 17:50:10 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:52.028 17:50:10 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:52.289 17:50:10 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:52.289 17:50:10 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:52.550 17:50:10 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:52.550 17:50:10 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:52.550 17:50:10 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:52.550 17:50:10 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:52.550 17:50:10 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:52.550 17:50:10 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:52.550 17:50:10 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:52.550 17:50:10 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:52.550 17:50:10 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:52.550 17:50:10 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:52.550 17:50:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:52.550 17:50:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:52.550 17:50:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:52.550 17:50:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:52.550 17:50:10 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:52.550 17:50:10 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:52.550 17:50:10 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:52.550 17:50:10 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:52.812 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:52.812 fio-3.35 00:08:52.812 Starting 1 thread 00:08:58.086 00:08:58.086 test: (groupid=0, jobs=1): err= 0: pid=64170: Fri Oct 25 17:50:16 2024 00:08:58.086 read: IOPS=19.1k, BW=74.5MiB/s (78.1MB/s)(149MiB/2001msec) 00:08:58.086 slat (nsec): min=4227, max=79471, avg=5478.24, stdev=2816.86 00:08:58.086 clat (usec): min=707, max=9606, avg=3329.19, stdev=1116.69 00:08:58.086 lat (usec): min=720, max=9618, avg=3334.67, stdev=1117.84 00:08:58.086 clat percentiles (usec): 00:08:58.086 | 1.00th=[ 1975], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2507], 00:08:58.086 | 30.00th=[ 2606], 40.00th=[ 2769], 50.00th=[ 2933], 60.00th=[ 3130], 00:08:58.086 | 70.00th=[ 3490], 80.00th=[ 4113], 90.00th=[ 5014], 95.00th=[ 5735], 00:08:58.086 | 99.00th=[ 6915], 99.50th=[ 7308], 99.90th=[ 9110], 99.95th=[ 9241], 00:08:58.086 | 99.99th=[ 9372] 00:08:58.086 bw ( KiB/s): min=73168, max=79560, per=99.15%, avg=75592.00, stdev=3464.44, samples=3 00:08:58.086 iops : min=18292, max=19890, avg=18898.00, stdev=866.11, samples=3 00:08:58.086 write: IOPS=19.0k, BW=74.4MiB/s (78.0MB/s)(149MiB/2001msec); 0 zone resets 00:08:58.086 slat (usec): min=4, max=886, avg= 5.65, stdev= 5.32 00:08:58.086 clat (usec): min=662, max=9785, avg=3365.33, stdev=1127.07 00:08:58.086 lat (usec): min=675, max=9795, avg=3370.98, stdev=1128.24 00:08:58.086 clat percentiles (usec): 00:08:58.086 | 1.00th=[ 1991], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2507], 00:08:58.086 | 30.00th=[ 2638], 40.00th=[ 2769], 50.00th=[ 2966], 60.00th=[ 3163], 00:08:58.086 | 70.00th=[ 3523], 80.00th=[ 4178], 90.00th=[ 5080], 95.00th=[ 5800], 00:08:58.086 | 99.00th=[ 6915], 99.50th=[ 7242], 99.90th=[ 8717], 99.95th=[ 9110], 00:08:58.086 | 99.99th=[ 9372] 00:08:58.086 bw ( KiB/s): min=73368, max=79480, per=99.25%, avg=75608.00, stdev=3367.00, samples=3 00:08:58.086 iops : min=18342, max=19870, avg=18902.00, stdev=841.75, samples=3 00:08:58.086 lat (usec) : 750=0.01%, 1000=0.01% 00:08:58.086 lat (msec) : 2=1.05%, 4=77.19%, 10=21.75% 00:08:58.086 cpu : usr=98.90%, sys=0.00%, ctx=2, majf=0, minf=607 00:08:58.086 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:58.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:58.086 issued rwts: total=38139,38110,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:58.086 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:58.086 00:08:58.086 Run status group 0 (all jobs): 00:08:58.086 READ: bw=74.5MiB/s (78.1MB/s), 74.5MiB/s-74.5MiB/s (78.1MB/s-78.1MB/s), io=149MiB (156MB), run=2001-2001msec 00:08:58.086 WRITE: bw=74.4MiB/s (78.0MB/s), 74.4MiB/s-74.4MiB/s (78.0MB/s-78.0MB/s), io=149MiB (156MB), run=2001-2001msec 00:08:58.086 ----------------------------------------------------- 00:08:58.086 Suppressions used: 00:08:58.086 count bytes template 00:08:58.086 1 32 /usr/src/fio/parse.c 00:08:58.086 1 8 libtcmalloc_minimal.so 00:08:58.086 ----------------------------------------------------- 00:08:58.086 00:08:58.086 17:50:16 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 
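Each nvme_fio iteration has the same shape the trace shows: spdk_nvme_identify confirms the namespace and checks for extended-LBA formats (which would change the block size passed as --bs), then fio runs with SPDK's nvme ioengine preloaded; libasan.so.8 is placed first in LD_PRELOAD because this is an ASAN build. One iteration, standalone (paths as in this run):

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
# fio treats ':' as a separator inside --filename, so the PCI address is
# written with dots: 0000.00.11.0 rather than 0000:00:11.0.
LD_PRELOAD="/usr/lib64/libasan.so.8 $plugin" /usr/src/fio/fio \
    /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096

Two more iterations follow below, for 0000:00:12.0 and 0000:00:13.0.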
00:08:58.086 17:50:16 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:58.086 17:50:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:58.086 17:50:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:58.348 17:50:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:58.348 17:50:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:58.610 17:50:16 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:58.610 17:50:16 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:58.610 17:50:16 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:58.610 17:50:16 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:58.610 17:50:16 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:58.610 17:50:16 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:58.610 17:50:16 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:58.610 17:50:16 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:58.610 17:50:16 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:58.610 17:50:16 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:58.610 17:50:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:58.610 17:50:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:58.610 17:50:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:58.610 17:50:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:58.610 17:50:16 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:58.610 17:50:16 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:58.610 17:50:16 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:58.610 17:50:16 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:58.871 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:58.871 fio-3.35 00:08:58.871 Starting 1 thread 00:09:05.463 00:09:05.463 test: (groupid=0, jobs=1): err= 0: pid=64230: Fri Oct 25 17:50:23 2024 00:09:05.463 read: IOPS=19.1k, BW=74.6MiB/s (78.2MB/s)(149MiB/2001msec) 00:09:05.463 slat (nsec): min=4259, max=67133, avg=5387.60, stdev=2725.33 00:09:05.463 clat (usec): min=256, max=10280, avg=3327.63, stdev=1177.11 00:09:05.463 lat (usec): min=261, max=10341, avg=3333.02, stdev=1178.24 00:09:05.463 clat percentiles (usec): 00:09:05.463 | 1.00th=[ 2008], 5.00th=[ 2278], 10.00th=[ 2409], 20.00th=[ 2540], 00:09:05.463 | 30.00th=[ 2671], 40.00th=[ 2769], 50.00th=[ 2868], 60.00th=[ 3032], 
00:09:05.463 | 70.00th=[ 3294], 80.00th=[ 3982], 90.00th=[ 5145], 95.00th=[ 5997], 00:09:05.463 | 99.00th=[ 7177], 99.50th=[ 7504], 99.90th=[ 8586], 99.95th=[ 8848], 00:09:05.463 | 99.99th=[10159] 00:09:05.463 bw ( KiB/s): min=70672, max=83472, per=100.00%, avg=77421.33, stdev=6428.54, samples=3 00:09:05.463 iops : min=17668, max=20868, avg=19355.33, stdev=1607.13, samples=3 00:09:05.463 write: IOPS=19.1k, BW=74.5MiB/s (78.1MB/s)(149MiB/2001msec); 0 zone resets 00:09:05.463 slat (nsec): min=4324, max=69840, avg=5465.82, stdev=2697.17 00:09:05.463 clat (usec): min=225, max=10207, avg=3352.14, stdev=1173.63 00:09:05.463 lat (usec): min=229, max=10225, avg=3357.61, stdev=1174.71 00:09:05.463 clat percentiles (usec): 00:09:05.463 | 1.00th=[ 2040], 5.00th=[ 2311], 10.00th=[ 2442], 20.00th=[ 2573], 00:09:05.463 | 30.00th=[ 2671], 40.00th=[ 2802], 50.00th=[ 2900], 60.00th=[ 3064], 00:09:05.463 | 70.00th=[ 3326], 80.00th=[ 4015], 90.00th=[ 5145], 95.00th=[ 6063], 00:09:05.463 | 99.00th=[ 7242], 99.50th=[ 7570], 99.90th=[ 8586], 99.95th=[ 8979], 00:09:05.463 | 99.99th=[10159] 00:09:05.463 bw ( KiB/s): min=70680, max=83760, per=100.00%, avg=77549.33, stdev=6564.83, samples=3 00:09:05.463 iops : min=17670, max=20940, avg=19387.33, stdev=1641.21, samples=3 00:09:05.463 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.02% 00:09:05.463 lat (msec) : 2=0.83%, 4=79.08%, 10=20.03%, 20=0.02% 00:09:05.463 cpu : usr=98.90%, sys=0.10%, ctx=20, majf=0, minf=607 00:09:05.463 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:05.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:05.464 issued rwts: total=38208,38176,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:05.464 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:05.464 00:09:05.464 Run status group 0 (all jobs): 00:09:05.464 READ: bw=74.6MiB/s (78.2MB/s), 74.6MiB/s-74.6MiB/s (78.2MB/s-78.2MB/s), io=149MiB (156MB), run=2001-2001msec 00:09:05.464 WRITE: bw=74.5MiB/s (78.1MB/s), 74.5MiB/s-74.5MiB/s (78.1MB/s-78.1MB/s), io=149MiB (156MB), run=2001-2001msec 00:09:05.464 ----------------------------------------------------- 00:09:05.464 Suppressions used: 00:09:05.464 count bytes template 00:09:05.464 1 32 /usr/src/fio/parse.c 00:09:05.464 1 8 libtcmalloc_minimal.so 00:09:05.464 ----------------------------------------------------- 00:09:05.464 00:09:05.464 17:50:23 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:05.464 17:50:23 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:05.464 17:50:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:05.464 17:50:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:05.464 17:50:23 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:05.464 17:50:23 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:05.726 17:50:24 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:05.726 17:50:24 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:05.726 17:50:24 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:05.726 17:50:24 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:05.726 17:50:24 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:05.726 17:50:24 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:05.726 17:50:24 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:05.726 17:50:24 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:05.726 17:50:24 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:05.726 17:50:24 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:05.726 17:50:24 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:05.726 17:50:24 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:05.726 17:50:24 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:05.726 17:50:24 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:05.726 17:50:24 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:05.726 17:50:24 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:05.726 17:50:24 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:05.726 17:50:24 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:05.987 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:05.987 fio-3.35 00:09:05.987 Starting 1 thread 00:09:14.142 00:09:14.142 test: (groupid=0, jobs=1): err= 0: pid=64286: Fri Oct 25 17:50:31 2024 00:09:14.142 read: IOPS=18.5k, BW=72.4MiB/s (76.0MB/s)(145MiB/2001msec) 00:09:14.142 slat (nsec): min=4248, max=72716, avg=5594.15, stdev=2948.42 00:09:14.142 clat (usec): min=731, max=9068, avg=3428.52, stdev=1182.60 00:09:14.142 lat (usec): min=743, max=9140, avg=3434.12, stdev=1183.84 00:09:14.142 clat percentiles (usec): 00:09:14.142 | 1.00th=[ 2040], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2540], 00:09:14.142 | 30.00th=[ 2638], 40.00th=[ 2769], 50.00th=[ 2933], 60.00th=[ 3228], 00:09:14.142 | 70.00th=[ 3687], 80.00th=[ 4424], 90.00th=[ 5276], 95.00th=[ 5866], 00:09:14.142 | 99.00th=[ 6915], 99.50th=[ 7373], 99.90th=[ 8160], 99.95th=[ 8455], 00:09:14.142 | 99.99th=[ 8848] 00:09:14.142 bw ( KiB/s): min=68886, max=79568, per=100.00%, avg=74914.00, stdev=5471.95, samples=3 00:09:14.142 iops : min=17221, max=19892, avg=18728.33, stdev=1368.26, samples=3 00:09:14.142 write: IOPS=18.6k, BW=72.5MiB/s (76.0MB/s)(145MiB/2001msec); 0 zone resets 00:09:14.142 slat (usec): min=4, max=100, avg= 5.74, stdev= 3.06 00:09:14.142 clat (usec): min=599, max=8982, avg=3442.68, stdev=1186.89 00:09:14.142 lat (usec): min=610, max=9001, avg=3448.41, stdev=1188.09 00:09:14.142 clat percentiles (usec): 00:09:14.142 | 1.00th=[ 2057], 5.00th=[ 2278], 10.00th=[ 2409], 20.00th=[ 2540], 00:09:14.142 | 30.00th=[ 2671], 40.00th=[ 2802], 50.00th=[ 2966], 60.00th=[ 3228], 00:09:14.142 | 70.00th=[ 3687], 80.00th=[ 4490], 90.00th=[ 5276], 95.00th=[ 5932], 00:09:14.142 | 
99.00th=[ 6980], 99.50th=[ 7439], 99.90th=[ 8094], 99.95th=[ 8356], 00:09:14.142 | 99.99th=[ 8848] 00:09:14.142 bw ( KiB/s): min=68838, max=79488, per=100.00%, avg=74959.33, stdev=5500.73, samples=3 00:09:14.142 iops : min=17209, max=19872, avg=18739.67, stdev=1375.46, samples=3 00:09:14.142 lat (usec) : 750=0.01%, 1000=0.01% 00:09:14.142 lat (msec) : 2=0.72%, 4=73.72%, 10=25.54% 00:09:14.142 cpu : usr=98.85%, sys=0.05%, ctx=6, majf=0, minf=605 00:09:14.142 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:14.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:14.142 issued rwts: total=37113,37139,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.142 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:14.142 00:09:14.142 Run status group 0 (all jobs): 00:09:14.142 READ: bw=72.4MiB/s (76.0MB/s), 72.4MiB/s-72.4MiB/s (76.0MB/s-76.0MB/s), io=145MiB (152MB), run=2001-2001msec 00:09:14.142 WRITE: bw=72.5MiB/s (76.0MB/s), 72.5MiB/s-72.5MiB/s (76.0MB/s-76.0MB/s), io=145MiB (152MB), run=2001-2001msec 00:09:14.142 ----------------------------------------------------- 00:09:14.142 Suppressions used: 00:09:14.142 count bytes template 00:09:14.142 1 32 /usr/src/fio/parse.c 00:09:14.142 1 8 libtcmalloc_minimal.so 00:09:14.142 ----------------------------------------------------- 00:09:14.142 00:09:14.142 17:50:31 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:14.142 17:50:31 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:14.142 00:09:14.142 real 0m28.876s 00:09:14.142 user 0m16.653s 00:09:14.142 sys 0m22.952s 00:09:14.142 17:50:31 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:14.142 17:50:31 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:14.142 ************************************ 00:09:14.142 END TEST nvme_fio 00:09:14.142 ************************************ 00:09:14.142 00:09:14.142 real 1m37.854s 00:09:14.142 user 3m36.932s 00:09:14.142 sys 0m33.364s 00:09:14.142 17:50:31 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:14.142 17:50:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:14.142 ************************************ 00:09:14.142 END TEST nvme 00:09:14.142 ************************************ 00:09:14.142 17:50:31 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:14.142 17:50:31 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:14.142 17:50:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:14.142 17:50:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:14.142 17:50:31 -- common/autotest_common.sh@10 -- # set +x 00:09:14.142 ************************************ 00:09:14.142 START TEST nvme_scc 00:09:14.142 ************************************ 00:09:14.142 17:50:31 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:14.142 * Looking for test storage... 
00:09:14.142 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:14.142 17:50:32 nvme_scc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:09:14.142 17:50:32 nvme_scc -- common/autotest_common.sh@1689 -- # lcov --version 00:09:14.142 17:50:32 nvme_scc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:09:14.142 17:50:32 nvme_scc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:09:14.142 17:50:32 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.142 17:50:32 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.142 17:50:32 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.142 17:50:32 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.142 17:50:32 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.142 17:50:32 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.142 17:50:32 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.142 17:50:32 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.142 17:50:32 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.142 17:50:32 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.142 17:50:32 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.142 17:50:32 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:14.142 17:50:32 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:14.142 17:50:32 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.142 17:50:32 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:14.142 17:50:32 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:14.142 17:50:32 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:14.142 17:50:32 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.142 17:50:32 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:14.142 17:50:32 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.142 17:50:32 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:14.142 17:50:32 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:14.142 17:50:32 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.142 17:50:32 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:14.142 17:50:32 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.142 17:50:32 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.142 17:50:32 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.142 17:50:32 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:14.142 17:50:32 nvme_scc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.142 17:50:32 nvme_scc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:09:14.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.142 --rc genhtml_branch_coverage=1 00:09:14.142 --rc genhtml_function_coverage=1 00:09:14.142 --rc genhtml_legend=1 00:09:14.142 --rc geninfo_all_blocks=1 00:09:14.142 --rc geninfo_unexecuted_blocks=1 00:09:14.142 00:09:14.142 ' 00:09:14.142 17:50:32 nvme_scc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:09:14.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.142 --rc genhtml_branch_coverage=1 00:09:14.142 --rc genhtml_function_coverage=1 00:09:14.142 --rc genhtml_legend=1 00:09:14.142 --rc geninfo_all_blocks=1 00:09:14.142 --rc geninfo_unexecuted_blocks=1 00:09:14.142 00:09:14.142 ' 00:09:14.142 17:50:32 nvme_scc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 
00:09:14.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.142 --rc genhtml_branch_coverage=1 00:09:14.142 --rc genhtml_function_coverage=1 00:09:14.142 --rc genhtml_legend=1 00:09:14.142 --rc geninfo_all_blocks=1 00:09:14.142 --rc geninfo_unexecuted_blocks=1 00:09:14.142 00:09:14.142 ' 00:09:14.142 17:50:32 nvme_scc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:09:14.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.142 --rc genhtml_branch_coverage=1 00:09:14.142 --rc genhtml_function_coverage=1 00:09:14.142 --rc genhtml_legend=1 00:09:14.142 --rc geninfo_all_blocks=1 00:09:14.142 --rc geninfo_unexecuted_blocks=1 00:09:14.142 00:09:14.142 ' 00:09:14.142 17:50:32 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:14.142 17:50:32 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:14.142 17:50:32 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:14.142 17:50:32 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:14.142 17:50:32 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:14.142 17:50:32 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:14.142 17:50:32 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.142 17:50:32 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.142 17:50:32 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.142 17:50:32 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.142 17:50:32 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.142 17:50:32 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.142 17:50:32 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:14.143 17:50:32 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
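Two things just happened in the trace. First, the coverage preamble compared the installed lcov version against 2 (lt 1.15 2 via cmp_versions in scripts/common.sh) to decide which LCOV_OPTS flag set to export; the 1.x spelling with --rc lcov_branch_coverage=1 won. That gate condenses to roughly this sketch, assuming purely numeric version fields (the real cmp_versions also pads missing fields and handles '-'/':' separators and other operators):

    # True (exit 0) when version $1 is strictly less than $2.
    lt() {
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    lt 1.15 2    # exit 0, so the lcov 1.x flag set is exported

Second, sourcing test/common/nvme/functions.sh pulled in scripts/common.sh and the pkgdep paths/export.sh, which prepends the toolchain directories to PATH on every source (hence the repeated entries echoed above). What follows below is scan_nvme_ctrls populating one bash associative array per controller and namespace (nvme0, nvme0n1, ...) from nvme-cli output: nvme_get sets IFS=:, reads each "register : value" line of nvme id-ctrl / id-ns, and evals it into a globally declared array, which is why the log repeats the IFS/read/eval trio once per field. Condensed to a sketch (helper name hypothetical, unlike the real nvme_get shown in the trace):

    # Scrape `nvme id-ctrl` key/value output into the named associative array.
    scrape_ctrl() {    # usage: scrape_ctrl nvme0 /dev/nvme0
        local -n out=$1    # nameref to the caller's array
        local reg val
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}    # keys arrive space-padded
            # mirror the trace's [[ -n $val ]] guard: skip header/empty lines
            [[ -n "$val" ]] && out[$reg]=${val# }
        done < <(nvme id-ctrl "$2")
    }
    declare -A nvme0
    scrape_ctrl nvme0 /dev/nvme0
    echo "${nvme0[vid]}"    # 0x1b36 for the QEMU controllers scanned below

Every identify field the controllers report then becomes queryable by name, which is what the long per-register dump below is building.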
00:09:14.143 17:50:32 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:14.143 17:50:32 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:14.143 17:50:32 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:14.143 17:50:32 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:14.143 17:50:32 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:14.143 17:50:32 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:14.143 17:50:32 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:14.143 17:50:32 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:14.143 17:50:32 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:14.143 17:50:32 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:14.143 17:50:32 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:14.143 17:50:32 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:14.143 17:50:32 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:14.143 17:50:32 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:14.143 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:14.404 Waiting for block devices as requested 00:09:14.404 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:14.404 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:14.404 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:14.666 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:19.968 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:19.968 17:50:37 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:19.968 17:50:37 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:19.968 17:50:37 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:19.968 17:50:37 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:19.968 17:50:37 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:19.968 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:19.969 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:19.970 17:50:37 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:19.970 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:19.971 17:50:37 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.971 17:50:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.971 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.971 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:19.971 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:19.971 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.971 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.971 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:19.972 17:50:38 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:19.972 17:50:38 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:19.972 17:50:38 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:19.972 17:50:38 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:19.973 17:50:38 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:19.973 17:50:38 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:19.973 17:50:38 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:19.973 17:50:38 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:19.973 17:50:38 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:19.973 17:50:38 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:19.973 17:50:38 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:19.973 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.973 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.973 17:50:38 nvme_scc -- 
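The repetitive IFS=: / read -r / eval trace above is nvme_get filling one global bash associative array per device from nvme-cli output. Below is a minimal sketch of that loop, reconstructed from the nvme/functions.sh@16-23 references in this trace; the exact trimming and quoting are assumptions, and NVME_CMD stands in for the /usr/local/src/nvme-cli/nvme path seen here.

    nvme_get() {                                # e.g. nvme_get nvme1 id-ctrl /dev/nvme1
        local ref=$1 reg val                    # functions.sh@17
        shift                                   # functions.sh@18
        local -gA "$ref=()"                     # functions.sh@20: global assoc array, nvme1=()
        while IFS=: read -r reg val; do         # functions.sh@21: split "reg : val" lines
            [[ -n $val ]] || continue           # functions.sh@22: skip lines with no value
            reg=${reg// /} val=${val## }        # trim key padding / leading space (assumption)
            eval "${ref}[${reg}]=\"${val}\""    # functions.sh@23: e.g. nvme1[vid]="0x1b36"
        done < <("${NVME_CMD:-nvme}" "$@")      # functions.sh@16: id-ctrl or id-ns output
    }

Each eval in the trace is one pass through this loop, which is why every assignment is bracketed by the same IFS=: and read -r lines.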
00:09:19.973 17:50:38 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:09:19.973 17:50:38 nvme_scc -- nvme/functions.sh@21-23 -- # nvme1 id-ctrl fields: vid=0x1b36 ssvid=0x1af4 sn='12340 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
00:09:19.973 17:50:38 nvme_scc -- nvme/functions.sh@21-23 -- # nvme1 id-ctrl fields: crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0
00:09:19.974 17:50:38 nvme_scc -- nvme/functions.sh@21-23 -- # nvme1 id-ctrl fields: hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:09:19.974 17:50:38 nvme_scc -- nvme/functions.sh@21-23 -- # nvme1 id-ctrl fields: sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:09:19.975 17:50:38 nvme_scc -- nvme/functions.sh@21-23 -- # nvme1 power states: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload='-'
00:09:19.975 17:50:38 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:09:19.975 17:50:38 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:19.975 17:50:38 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:09:19.975 17:50:38 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:09:19.975 17:50:38 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:09:19.975 17:50:38 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
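After the controller's id-ctrl dump, the trace walks that controller's namespaces (nvme/functions.sh@53-57) and calls nvme_get again with id-ns. A sketch of that walk, assuming it runs inside the scanning function (the nameref and the glob are taken from the trace; the exact loop body ordering is an assumption):

    local -n _ctrl_ns=${ctrl_dev}_ns             # functions.sh@53: nameref, e.g. nvme1_ns
    for ns in "$ctrl/${ctrl##*/}n"*; do          # functions.sh@54: /sys/class/nvme/nvme1/nvme1n*
        [[ -e $ns ]] || continue                 # functions.sh@55: glob may not match
        ns_dev=${ns##*/}                         # functions.sh@56: e.g. nvme1n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"  # functions.sh@57: fill nvme1n1=(...)
        _ctrl_ns[${ns_dev##*n}]=$ns_dev          # functions.sh@58: keyed by namespace number
    done

Since $ctrl is the sysfs path, ${ctrl##*/} strips the directory part, so the glob expands to nvme1n1, nvme1n2, and so on under that controller.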
00:09:19.975 17:50:38 nvme_scc -- nvme/functions.sh@21-23 -- # nvme1n1 id-ns fields: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
00:09:19.976 17:50:38 nvme_scc -- nvme/functions.sh@21-23 -- # nvme1n1 id-ns fields: nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@21-23 -- # nvme1n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:09:19.977 17:50:38 nvme_scc -- scripts/common.sh@18-27 -- # pci filter checks: [[ =~ 0000:00:12.0 ]], [[ -z '' ]], return 0
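Each lbafN entry above encodes a metadata size (ms), a relative performance hint (rp), and lbads, the base-2 logarithm of the data block size; the "(in use)" marker flags the format selected by flbas. So lbads:9 means 512-byte blocks, and the in-use lbads:12 format on nvme1n1 means 4096-byte blocks. A hypothetical helper (not part of the scripts traced here) that pulls the block size out of one of these strings:

    lbaf_block_size() {
        local lbaf=$1                   # e.g. 'ms:64 lbads:12 rp:0 (in use)'
        local lbads=${lbaf#*lbads:}     # -> '12 rp:0 (in use)'
        lbads=${lbads%% *}              # -> '12'
        echo $((1 << lbads))            # 2^12 = 4096
    }
    lbaf_block_size 'ms:64 lbads:12 rp:0 (in use)'   # prints 4096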
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.977 17:50:38 
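Every field captured for nvme2 above comes verbatim from nvme id-ctrl /dev/nvme2, so the identify values can be spot-checked by hand; the grep pattern here is only an example:

    /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 | grep -E '^(vid|ssvid|sn|mn|fr)\b'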
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:19.977 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:19.978 17:50:38 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:19.978 17:50:38 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
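The wctemp/cctemp values just stored are in Kelvin, which is how NVMe reports its temperature thresholds; converting the two readings above:

    echo $(( 343 - 273 ))   # wctemp: warning threshold,  70 (°C)
    echo $(( 373 - 273 ))   # cctemp: critical threshold, 100 (°C)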
00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:19.978 17:50:38 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:19.978 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
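sqes and cqes pack the minimum (low nibble) and maximum (high nibble) queue-entry sizes as powers of two; decoding the 0x66 and 0x44 captured above:

    sqes=0x66 cqes=0x44
    printf 'SQE %d..%d bytes\n' $(( 2 ** (sqes & 0xf) )) $(( 2 ** (sqes >> 4) ))   # 64..64
    printf 'CQE %d..%d bytes\n' $(( 2 ** (cqes & 0xf) )) $(( 2 ** (cqes >> 4) ))   # 16..16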
00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:19.979 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:19.980 
17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
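With the controller table done, a nameref points _ctrl_ns at nvme2_ns and the namespace pass is glob-driven: for ctrl=/sys/class/nvme/nvme2 the pattern "$ctrl/${ctrl##*/}n"* expands to .../nvme2n1 and .../nvme2n2, each fed back through nvme_get. The loop skeleton, with a no-match guard added for illustration:

    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/${ctrl##*/}n"*; do
        [[ -e $ns ]] || continue    # glob may match nothing
        ns_dev=${ns##*/}            # nvme2n1, nvme2n2, ...
        echo "$ns_dev"              # stand-in for: nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
    done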
0x100000 ]] 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
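Those size fields pin down the namespace geometry: flbas 0x4 selects LBA format 4, which per the lbaf4 entry further down (lbads:12) means 2^12 = 4096-byte blocks, and nsze 0x100000 is the block count, so:

    nsze=0x100000 lbads=12
    echo "$(( nsze * (1 << lbads) )) bytes"         # 4294967296
    echo "$(( (nsze * (1 << lbads)) >> 30 )) GiB"   # 4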
00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.980 17:50:38 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.980 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:19.981 17:50:38 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:19.981 17:50:38 nvme_scc -- 
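Each lbafN string stored above is a triple: ms (metadata bytes per block), lbads (log2 of the data block size) and rp (relative performance), with flbas marking which format is "(in use)". A tiny decoder for one such string; the parsing here is illustrative:

    decode_lbaf() {
        local ms lbads rp
        read -r ms lbads rp <<< "${1//[a-z:]/ }"   # "ms:0 lbads:12 rp:0" -> "0 12 0"
        echo "block=$((1 << lbads))B metadata=${ms}B rp=$rp"
    }
    decode_lbaf 'ms:0 lbads:12 rp:0'               # -> block=4096B metadata=0B rp=0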
nvme/functions.sh@18 -- # shift 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:19.981 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.982 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.982 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:19.982 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:19.982 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:19.982 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.982 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.982 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:19.982 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:19.982 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:19.982 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.982 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.982 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:19.982 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:19.982 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:19.982 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.982 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.982 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.982 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:19.982 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:19.982 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.982 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.982 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:19.982 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:19.982 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:19.982 17:50:38 nvme_scc 
00:09:19.982 17:50:38 nvme_scc -- nvme/functions.sh@21-23 -- # id-ns parse of nvme2n2 continues, one IFS=':' read / [[ -n $val ]] / eval cycle per field:
00:09:19.982 17:50:38 nvme_scc -- #   dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:09:19.982 17:50:38 nvme_scc -- #   mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:19.983 17:50:38 nvme_scc -- #   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
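Every assignment in this stretch of the trace is one pass of the same small loop in nvme/functions.sh: the nvme-cli binary the trace invokes (/usr/local/src/nvme-cli/nvme) prints one "register : value" row per field, and the loop folds each row into a global associative array named after the device. A minimal sketch of that pattern, assuming the usual nvme-cli row layout; the trimming details and argument handling below are guesses, since the trace only shows the IFS=':' read, the [[ -n $val ]] guard, and the per-field eval:

    #!/usr/bin/env bash
    # Sketch of the parse loop replayed above; not the verbatim function.
    NVME_CLI=/usr/local/src/nvme-cli/nvme    # binary the trace invokes

    nvme_get_sketch() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                  # e.g. declare -gA nvme2n2=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}         # 'lbaf  4 ' -> 'lbaf4'
            val=${val# }                     # drop the pad space after ':'
            [[ -n $val ]] || continue        # skip banner and blank rows
            eval "${ref}[${reg}]=\"${val}\"" # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
        done < <("$NVME_CLI" "$cmd" "$dev")
    }

    # Call shape matching the trace: nvme_get_sketch nvme2n2 id-ns /dev/nvme2n2

With two read variables, everything after the first ':' lands in val, which is why composite values such as the lbaf rows keep their inner 'ms:.../lbads:...' colons intact.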
00:09:19.983 17:50:38 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:09:19.983 17:50:38 nvme_scc -- nvme/functions.sh@54-57 -- # next namespace: /sys/class/nvme/nvme2/nvme2n3 exists; ns_dev=nvme2n3; nvme_get nvme2n3 id-ns /dev/nvme2n3
00:09:19.983 17:50:38 nvme_scc -- nvme/functions.sh@21-23 -- # id-ns parse of nvme2n3, same read/eval cycle per field:
00:09:19.983 17:50:38 nvme_scc -- #   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:09:19.983 17:50:38 nvme_scc -- #   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:09:19.984 17:50:38 nvme_scc -- #   mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:19.984 17:50:38 nvme_scc -- #   lbaf0-lbaf7 identical to nvme2n2 above; lbaf4 'ms:0 lbads:12 rp:0' is in use
00:09:19.984 17:50:38 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:09:19.984 17:50:38 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:09:19.984 17:50:38 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:09:19.984 17:50:38 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:09:19.984 17:50:38 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:09:19.984 17:50:38 nvme_scc -- nvme/functions.sh@47-48 -- # for ctrl in /sys/class/nvme/nvme*: /sys/class/nvme/nvme3 exists
00:09:19.984 17:50:38 nvme_scc -- nvme/functions.sh@49-50 -- # pci=0000:00:13.0; pci_can_use 0000:00:13.0 passes (scripts/common.sh@18-27 returns 0)
00:09:19.984 17:50:38 nvme_scc -- nvme/functions.sh@51-52 -- # ctrl_dev=nvme3; nvme_get nvme3 id-ctrl /dev/nvme3 via /usr/local/src/nvme-cli/nvme
00:09:19.984 17:50:38 nvme_scc -- nvme/functions.sh@21-23 -- # id-ctrl parse of nvme3 begins: vid=0x1b36 ssvid=0x1af4
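The in-use LBA format recorded for both namespaces (lbaf4, lbads:12) pins down their geometry: the data block size is 2^lbads bytes and nsze counts blocks. A quick check of the numbers just parsed, written as a hypothetical helper; nothing like it exists in nvme/functions.sh:

    # Hypothetical: byte size of nvme2n3 from the fields parsed above.
    ns_bytes() {
        local lbads=12               # lbaf4 'ms:0 lbads:12 rp:0 (in use)'
        local nsze=$(( 0x100000 ))   # 1048576 blocks (nvme2n3[nsze])
        echo $(( nsze << lbads ))    # 1048576 * 4096 = 4294967296, i.e. 4 GiB
    }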
00:09:19.985 17:50:38 nvme_scc -- nvme/functions.sh@21-23 -- # id-ctrl parse of nvme3 continues, one read/eval cycle per register:
00:09:19.985 17:50:38 nvme_scc -- #   sn='12343 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0x2 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0
00:09:19.985 17:50:38 nvme_scc -- #   oaes=0x100 ctratt=0x88010 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
00:09:19.985 17:50:38 nvme_scc -- #   oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0
00:09:19.986 17:50:38 nvme_scc -- #   rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=1 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:09:19.986 17:50:38 nvme_scc -- #   sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:09:19.987 17:50:38 nvme_scc -- #   subnqn=nqn.2019-08.org.qemu:fdp-subsys3 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:19.987 17:50:38 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:19.987 
17:50:38 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:19.987 17:50:38 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:19.988 17:50:38 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:19.988 17:50:38 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:19.988 17:50:38 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:19.988 17:50:38 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:19.988 17:50:38 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:19.988 17:50:38 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:19.988 17:50:38 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:19.988 17:50:38 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:19.988 17:50:38 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:19.988 17:50:38 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:19.988 17:50:38 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:19.988 17:50:38 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:19.988 17:50:38 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:19.988 17:50:38 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:19.988 17:50:38 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:19.988 17:50:38 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:19.988 17:50:38 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:19.988 17:50:38 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:19.988 17:50:38 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:19.988 17:50:38 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:19.988 17:50:38 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:19.988 17:50:38 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:19.988 17:50:38 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:19.988 17:50:38 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:19.988 17:50:38 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:19.988 17:50:38 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:19.988 17:50:38 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:19.988 17:50:38 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:19.988 17:50:38 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:19.988 17:50:38 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:19.988 17:50:38 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:19.988 17:50:38 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:19.988 17:50:38 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:19.988 17:50:38 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:20.561 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:20.823 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:20.823 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:20.823 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:20.823 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:09:21.085 17:50:39 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:21.085 17:50:39 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:21.085 17:50:39 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:21.085 17:50:39 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:21.085 ************************************ 00:09:21.085 START TEST nvme_simple_copy 00:09:21.085 ************************************ 00:09:21.085 17:50:39 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:21.346 Initializing NVMe Controllers 00:09:21.346 Attaching to 0000:00:10.0 00:09:21.346 Controller supports SCC. Attached to 0000:00:10.0 00:09:21.346 Namespace ID: 1 size: 6GB 00:09:21.346 Initialization complete. 00:09:21.346 00:09:21.346 Controller QEMU NVMe Ctrl (12340 ) 00:09:21.346 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:21.346 Namespace Block Size:4096 00:09:21.346 Writing LBAs 0 to 63 with Random Data 00:09:21.346 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:21.346 LBAs matching Written Data: 64 00:09:21.346 00:09:21.346 real 0m0.288s 00:09:21.346 user 0m0.122s 00:09:21.346 sys 0m0.063s 00:09:21.346 ************************************ 00:09:21.346 END TEST nvme_simple_copy 00:09:21.346 ************************************ 00:09:21.346 17:50:39 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:21.346 17:50:39 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:21.346 00:09:21.346 real 0m7.712s 00:09:21.346 user 0m1.072s 00:09:21.346 sys 0m1.423s 00:09:21.346 17:50:39 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:21.346 17:50:39 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:21.346 ************************************ 00:09:21.346 END TEST nvme_scc 00:09:21.346 ************************************ 00:09:21.346 17:50:39 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:21.346 17:50:39 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:21.346 17:50:39 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:21.346 17:50:39 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:21.346 17:50:39 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:21.346 17:50:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:21.346 17:50:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:21.346 17:50:39 -- common/autotest_common.sh@10 -- # set +x 00:09:21.346 ************************************ 00:09:21.346 START TEST nvme_fdp 00:09:21.346 ************************************ 00:09:21.346 17:50:39 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh 00:09:21.608 * Looking for test storage... 
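The get_ctrls_with_feature walk a few entries back accepted every controller because each ctrl_has_scc call read back oncs=0x15d and tested bit 8, which advertises the NVMe Copy command that nvme_simple_copy just exercised. A hedged sketch of that single bit test; has_scc is an illustrative name for what the script calls ctrl_has_scc:

#!/usr/bin/env bash
# Sketch of the ONCS test at functions.sh@188: bit 8 of the Optional
# NVM Command Support field indicates Copy command support.
has_scc() {
    local oncs=$1
    (( oncs & 1 << 8 ))                 # 0x15d & 0x100 is nonzero, so this passes
}

for oncs in 0x15d 0x5d; do
    if has_scc "$oncs"; then
        echo "oncs=$oncs: controller supports simple copy"
    else
        echo "oncs=$oncs: no copy support"
    fi
done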
00:09:21.608 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:21.608 17:50:39 nvme_fdp -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:09:21.608 17:50:39 nvme_fdp -- common/autotest_common.sh@1689 -- # lcov --version 00:09:21.608 17:50:39 nvme_fdp -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:09:21.608 17:50:39 nvme_fdp -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:09:21.608 17:50:39 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.608 17:50:39 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.608 17:50:39 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.608 17:50:39 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.608 17:50:39 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.608 17:50:39 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.608 17:50:39 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.608 17:50:39 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.608 17:50:39 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.608 17:50:39 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.608 17:50:39 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.608 17:50:39 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:21.608 17:50:39 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:21.608 17:50:39 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.608 17:50:39 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:21.608 17:50:39 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:21.608 17:50:39 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:21.608 17:50:39 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.608 17:50:39 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:21.608 17:50:39 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.608 17:50:39 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:21.608 17:50:39 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:21.608 17:50:39 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.608 17:50:39 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:21.608 17:50:39 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.608 17:50:39 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.608 17:50:39 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.608 17:50:39 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:21.608 17:50:39 nvme_fdp -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.608 17:50:39 nvme_fdp -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:09:21.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.608 --rc genhtml_branch_coverage=1 00:09:21.608 --rc genhtml_function_coverage=1 00:09:21.608 --rc genhtml_legend=1 00:09:21.608 --rc geninfo_all_blocks=1 00:09:21.608 --rc geninfo_unexecuted_blocks=1 00:09:21.608 00:09:21.608 ' 00:09:21.608 17:50:39 nvme_fdp -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:09:21.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.608 --rc genhtml_branch_coverage=1 00:09:21.608 --rc genhtml_function_coverage=1 00:09:21.608 --rc genhtml_legend=1 00:09:21.608 --rc geninfo_all_blocks=1 00:09:21.608 --rc geninfo_unexecuted_blocks=1 00:09:21.608 00:09:21.608 ' 00:09:21.608 17:50:39 nvme_fdp -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 
00:09:21.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.608 --rc genhtml_branch_coverage=1 00:09:21.608 --rc genhtml_function_coverage=1 00:09:21.608 --rc genhtml_legend=1 00:09:21.608 --rc geninfo_all_blocks=1 00:09:21.608 --rc geninfo_unexecuted_blocks=1 00:09:21.608 00:09:21.608 ' 00:09:21.608 17:50:39 nvme_fdp -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:09:21.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.608 --rc genhtml_branch_coverage=1 00:09:21.608 --rc genhtml_function_coverage=1 00:09:21.608 --rc genhtml_legend=1 00:09:21.608 --rc geninfo_all_blocks=1 00:09:21.608 --rc geninfo_unexecuted_blocks=1 00:09:21.608 00:09:21.608 ' 00:09:21.608 17:50:39 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:21.608 17:50:39 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:21.608 17:50:39 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:21.608 17:50:39 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:21.608 17:50:39 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:21.608 17:50:39 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:21.608 17:50:39 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.609 17:50:39 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.609 17:50:39 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.609 17:50:39 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.609 17:50:39 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.609 17:50:39 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.609 17:50:39 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:21.609 17:50:39 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
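Right before the FDP test body starts, scripts/common.sh decides which lcov option set to export by running lt 1.15 2: both version strings are split on '.', '-', and ':' (the IFS=.-: and read -ra ver1 entries above) and compared field by field, with missing fields treated as zero. A reduced sketch of that comparison under those assumptions; version_lt is an illustrative stand-in for the script's lt/cmp_versions pair:

#!/usr/bin/env bash
# Sketch of the cmp_versions walk at scripts/common.sh@333-368:
# split on ".", "-", ":" and compare numeric fields left to right.
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1                            # equal is not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"    # matches the trace: lt 1.15 2 succeeds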
00:09:21.609 17:50:39 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:21.609 17:50:39 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:21.609 17:50:39 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:21.609 17:50:39 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:21.609 17:50:39 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:21.609 17:50:39 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:21.609 17:50:39 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:21.609 17:50:39 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:21.609 17:50:39 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:21.609 17:50:39 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:21.609 17:50:39 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:21.870 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:22.132 Waiting for block devices as requested 00:09:22.132 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:22.132 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:22.132 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:22.394 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:27.695 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:27.695 17:50:45 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:27.695 17:50:45 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:27.695 17:50:45 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:27.695 17:50:45 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:27.695 17:50:45 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
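Once setup.sh reset has rebound the four controllers from uio_pci_generic to the kernel nvme driver, scan_nvme_ctrls walks /sys/class/nvme/nvme* and records one PCI address per controller in the bdfs array declared at functions.sh@12. A minimal sketch of that walk; reading the BDF from the controller's sysfs "address" attribute is an assumption about one way to resolve it, not the script's exact pci_can_use path:

#!/usr/bin/env bash
# Sketch of the controller walk at functions.sh@47-51: visit every
# NVMe controller in sysfs and remember its PCI address (BDF).
declare -A bdfs                         # mirrors the declare -A bdfs in the trace

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue          # the glob may match nothing
    ctrl_dev=${ctrl##*/}                # e.g. nvme0
    bdf=$(<"$ctrl/address") || continue # assumption: sysfs "address" holds the BDF
    bdfs[$ctrl_dev]=$bdf
done

for dev in "${!bdfs[@]}"; do
    printf '%s -> %s\n' "$dev" "${bdfs[$dev]}"
done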
00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.695 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:27.696 17:50:45 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:27.696 17:50:45 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
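Every field being stored here for nvme0 is read back later through a bash nameref, as the earlier functions.sh@69-76 entries show for get_nvme_ctrl_feature nvme1 oncs: bind local -n _ctrl to the array name, guard on a non-empty value, and echo it. A self-contained sketch of that lookup; get_feature and the sample values are illustrative:

#!/usr/bin/env bash
# Sketch of the nameref lookup at functions.sh@69-76: given a
# controller array name and a register, echo the stored value.
declare -A nvme0=( [oncs]=0x15d [mdts]=7 )   # sample values from the trace

get_feature() {                         # the script calls this get_nvme_ctrl_feature
    local ctrl=$1 reg=$2
    [[ -n $ctrl ]] || return 1
    local -n _ctrl=$ctrl                # nameref: _ctrl now aliases the nvme0 array
    [[ -n ${_ctrl[$reg]} ]] || return 1
    echo "${_ctrl[$reg]}"
}

get_feature nvme0 oncs                  # -> 0x15d
get_feature nvme0 mdts                  # -> 7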
00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:27.696 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:27.696 17:50:45 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.697 17:50:45 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:27.697 17:50:45 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:27.697 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:27.698 17:50:45 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:27.698 
17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:27.698 17:50:45 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:27.698 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.698 17:50:45 nvme_fdp -- 
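The first namespace fields above carry the geometry: NLBAF is 0-based, so 7 means eight LBA formats (the lbaf0..lbaf7 rows later in this dump), and the low nibble of FLBAS selects the active one. A worked decode of the traced values, assuming the standard NVMe bit layout:

    nsfeat=0x14; nlbaf=7; flbas=0x4              # values from the dump above
    echo "LBA formats: $((nlbaf + 1))"           # NLBAF is 0-based -> 8
    echo "in use:      lbaf$((flbas & 0xf))"     # low nibble of FLBAS -> lbaf4
    echo "ext. LBA:    $(((flbas >> 4) & 1))"    # bit 4: metadata after data -> 0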
nvme/functions.sh@21 -- # read -r reg val 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:27.699 17:50:45 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.699 17:50:45 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:27.700 17:50:45 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:27.700 17:50:45 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:27.700 17:50:45 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:27.700 17:50:45 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # 
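lbaf4 is the row flagged "(in use)", matching flbas=0x4, and lbads is log2 of the block size, so the nvme0n1 numbers compose into a quick capacity check before the @58-63 bookkeeping registers nvme0 under BDF 0000:00:11.0 and the loop moves on to nvme1:

    nsze=0x140000; lbads=12                  # lbaf4 (in use): ms:0 lbads:12
    bs=$((1 << lbads))                       # 2^12 = 4096-byte blocks, no metadata
    echo "$((nsze * bs)) bytes"              # 5368709120 = exactly 5 GiB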
IFS=: 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.700 17:50:45 nvme_fdp -- 
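The nvme1 identity block reads as QEMU emulation: 0x1b36 is the Red Hat PCI vendor ID used for QEMU devices and 0x1af4 the Red Hat/virtio subsystem vendor ID (both as I recall them), with serial '12340 ' distinguishing this controller from the 12341 one above. mdts=7 caps a single transfer at 2^7 minimum-size pages; a worked example assuming the usual 4 KiB CAP.MPSMIN:

    mdts=7                                       # from the id-ctrl dump above
    mpsmin=4096                                  # assumption: CAP.MPSMIN = 4 KiB
    echo "max transfer: $(((1 << mdts) * mpsmin / 1024)) KiB"   # -> 512 KiB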
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.700 
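ver=0x10400 is the packed VER register: major version in the top 16 bits, minor in the next byte, tertiary in the low byte (and oaes=0x100 looks like the Namespace Attribute Notices bit, per my reading of the spec). Decoded:

    ver=0x10400
    printf 'NVMe %d.%d.%d\n' "$((ver >> 16))" "$(((ver >> 8) & 0xff))" "$((ver & 0xff))"
    # -> NVMe 1.4.0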
17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:27.700 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- 
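oacs=0x12a is a capability bitmask; decoding it with bit positions as I recall them from the spec (worth verifying) suggests why this controller suits an FDP run, since placement hints ride on the Directives support it advertises. acl=3 and aerl=3 are 0-based, i.e. four concurrent aborts and four async event requests:

    oacs=0x12a                                   # recalled bit layout, not verified
    (( oacs & 0x002 )) && echo "Format NVM"
    (( oacs & 0x008 )) && echo "Namespace Management"
    (( oacs & 0x020 )) && echo "Directives"              # FDP placement hints
    (( oacs & 0x100 )) && echo "Doorbell Buffer Config"  # the QEMU shadow doorbell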
# IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.701 17:50:45 nvme_fdp -- 
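wctemp and cctemp are reported in Kelvin, not Celsius; the QEMU defaults above convert to the familiar thresholds:

    wctemp=343; cctemp=373                       # Kelvin, per the NVMe spec
    echo "warning threshold:  $((wctemp - 273)) C"   # 70 C
    echo "critical threshold: $((cctemp - 273)) C"   # 100 C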
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 
17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.701 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.702 17:50:45 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:27.702 17:50:45 nvme_fdp -- 
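sqes and cqes pack two powers of two per field (required entry size in the low nibble, maximum in the high nibble), and oncs=0x15d appears to include the Copy bit, consistent with the mssrl/mcl/msrc limits in the namespace dumps further on. Decoding the queue entry sizes:

    sqes=0x66; cqes=0x44
    echo "SQE: required $((1 << (sqes & 0xf))) B, max $((1 << (sqes >> 4))) B"  # 64/64
    echo "CQE: required $((1 << (cqes & 0xf))) B, max $((1 << (cqes >> 4))) B"  # 16/16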
nvme/functions.sh@21 -- # IFS=: 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.702 17:50:45 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:27.702 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:27.703 17:50:45 nvme_fdp -- 
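nvme1n1 is formatted differently from nvme0n1: flbas=0x7 points at lbaf7, which on nvme0n1's table (this dump cuts off before nvme1n1's own lbaf7 row) is ms:64 lbads:12, i.e. 4 KiB of data plus 64 bytes of metadata per block. A worked decode under that assumption:

    nsze=0x17a17a; flbas=0x7                     # values from the dump above
    lbads=12; ms=64                              # assumed from nvme0n1's lbaf7 row
    echo "format in use: lbaf$((flbas & 0xf))"   # -> lbaf7
    echo "data bytes:    $((nsze * (1 << lbads)))"   # 6343335936, about 5.9 GiB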
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.703 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:27.704 17:50:45 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:27.704 17:50:45 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:27.704 17:50:45 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:27.704 17:50:45 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:27.704 
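[Editor's note: the trace above has just finished registering controller nvme1 and is starting on nvme2. Functions.sh@47-63 walk /sys/class/nvme/nvme*, gate each device on the PCI allowlist (pci_can_use from scripts/common.sh returns 0 here because PCI_ALLOWED is empty, hence the bare "[[ =~ 0000:00:12.0 ]]" test), pull id-ctrl plus per-namespace id-ns data, and record the results in the global ctrls/nvmes/bdfs/ordered_ctrls maps. A minimal sketch of that enumeration, assembled from the traced steps; how $pci is derived from sysfs is an assumption, and nvme_get is sketched a little further below:

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # assumption: BDF taken from the sysfs device link
        pci_can_use "$pci" || continue                    # PCI_ALLOWED/PCI_BLOCKED gate (scripts/common.sh)
        ctrl_dev=${ctrl##*/}                              # e.g. nvme2
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # fills the assoc array "nvme2"
        unset -n _ctrl_ns
        declare -n _ctrl_ns=${ctrl_dev}_ns                # per-controller namespace map
        for ns in "$ctrl/${ctrl##*/}n"*; do               # nvme2n1, nvme2n2, ...
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns##*n}]=$ns_dev                   # e.g. nvme2_ns[1]=nvme2n1
        done
        ctrls[$ctrl_dev]=$ctrl_dev
        nvmes[$ctrl_dev]=${ctrl_dev}_ns
        bdfs[$ctrl_dev]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done

End note.]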
17:50:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.704 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:27.705 17:50:45 nvme_fdp 
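[Editor's note: every "IFS=: / read -r reg val / eval" triple filling this log is one pass of the nvme_get loop whose preamble (functions.sh@16-23: local ref, shift, local -gA, then the nvme-cli invocation) is traced right above for nvme2. A sketch of that loop, grounded in the traced steps; the exact key/value trimming is an assumption:

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                    # global assoc array, e.g. nvme2=()
        while IFS=: read -r reg val; do
            reg=${reg//[![:alnum:]_]/}         # assumption: squeeze the key to a bash-safe name
            val=${val# }                       # assumption: drop the blank after ':'
            [[ -n $val ]] || continue          # header/blank lines carry no value
            eval "${ref}[$reg]=\"$val\""       # e.g. nvme2[vid]="0x1b36"
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

Afterwards the suite reads registers straight out of these arrays, e.g. ${nvme2[mdts]} or ${nvme2n1[flbas]}, which is why every field of id-ctrl and id-ns gets eval'd into place here. End note.]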
-- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.705 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:27.706 17:50:45 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.706 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.707 17:50:45 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:27.707 17:50:45 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.707 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.708 17:50:45 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:27.708 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.708 17:50:45 nvme_fdp -- 
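(Note: the trace above repeats the same small read/eval loop for every id-ns field. Below is a minimal reconstruction of the nvme_get helper being traced, pieced together from the functions.sh@17-23 lines in this log; NVME_BIN is a stand-in name for the locally built nvme-cli the log invokes as /usr/local/src/nvme-cli/nvme, and the exact quoting in the real nvme/functions.sh may differ.)

    # Reconstruction of the loop traced at nvme/functions.sh@17-23 (bash 4+):
    # run an nvme-cli query and fold each "name : value" output line into a
    # global associative array named after the device node.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "${ref}=()"                  # as in the trace: declare the map
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue          # banner lines carry no ":" payload
            reg=${reg//[[:space:]]/}           # "nsze " -> "nsze"
            val=${val# }                       # trim the single leading pad blank
            eval "${ref}[${reg}]=\"${val}\""   # e.g. nvme2n1[nsze]=0x100000
        done < <("${NVME_BIN:-nvme}" "$@")
    }
    # nvme_get nvme2n2 id-ns /dev/nvme2n2 reproduces the assignments logged here.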
nvme/functions.sh@21 -- # read -r reg val 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.709 17:50:45 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:27.709 17:50:45 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:27.709 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@54 -- # for 
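(Note: nvme2n2 has just been filed under its namespace number. A sketch of the per-namespace walk traced at functions.sh@54-58; the guard and quoting are reconstructed from this log, not copied from the repo.)

    # Walk /sys/class/nvme/nvme2/nvme2n<N>, parse each node with nvme_get,
    # and index it by <N>, which ${ns##*n} peels off the sysfs path.
    declare -A _ctrl_ns=()
    for ns in "$ctrl/${ctrl##*/}n"*; do        # $ctrl=/sys/class/nvme/nvme2
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                       # e.g. nvme2n2
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns##*n}]=$ns_dev            # _ctrl_ns[2]=nvme2n2
    done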
ns in "$ctrl/${ctrl##*/}n"* 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.710 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
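(Note: all three namespaces report the same geometry: nsze/ncap/nuse=0x100000 and flbas=0x4, which selects lbaf4 with lbads:12, i.e. 2^12 = 4096-byte blocks, so each namespace is 0x100000 * 4096 bytes = 4 GiB. ns_bytes below is a hypothetical helper, not part of the repo, showing how the arrays built above can be decoded; it needs bash 4.3+ for the nameref.)

    # Decode the active LBA format and compute the namespace size in bytes.
    ns_bytes() {
        local -n ns=$1                         # nameref to e.g. nvme2n1
        local fmt=$((ns[flbas] & 0xf))         # low nibble = active lbaf index
        local lbads=${ns[lbaf$fmt]#*lbads:}    # "ms:0 lbads:12 rp:0 (in use)"
        lbads=${lbads%% *}
        echo $((ns[nsze] * (1 << lbads)))      # 0x100000 * 4096 = 4294967296
    }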
00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:27.711 
17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:27.711 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:27.712 17:50:45 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:27.712 17:50:45 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:27.712 17:50:45 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:27.712 17:50:45 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:27.712 17:50:45 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:27.712 17:50:45 nvme_fdp -- 
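(Note: this stretch closes out controller nvme2, registering it in the ctrls/nvmes/bdfs/ordered_ctrls maps at functions.sh@60-63, then moves on to nvme3 behind the pci_can_use gate from scripts/common.sh. Below is a condensed, assumption-heavy sketch of that outer loop: the pci_can_use stub ignores the PCI_ALLOWED allow-list the real helper also honors, and the BDF lookup via readlink is a guess at how the log's 0000:00:13.0 is derived.)

    # Simplified stand-in for scripts/common.sh pci_can_use (deny-list only).
    pci_can_use() { [[ " ${PCI_BLOCKED-} " != *" $1 "* ]]; }

    declare -A ctrls=() nvmes=() bdfs=()
    declare -a ordered_ctrls=()
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:13.0
        pci_can_use "$pci" || continue
        ctrl_dev=${ctrl##*/}                             # e.g. nvme3
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        # ... per-namespace walk from the earlier sketch runs here ...
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev       # keeps index order
    done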
nvme/functions.sh@21 -- # IFS=: 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:27.712 17:50:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.712 17:50:46 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.712 
17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.712 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.713 17:50:46 nvme_fdp 
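(Note: the wctemp/cctemp values just captured are Kelvin, as the NVMe spec defines them; 343 K and 373 K are this QEMU controller's warning and critical composite-temperature thresholds. A quick, hypothetical use of the array nvme_get built:)

    echo "nvme3 warns at $((nvme3[wctemp] - 273)) C, critical at $((nvme3[cctemp] - 273)) C"
    # -> nvme3 warns at 70 C, critical at 100 C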
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.713 
17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.713 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.714 17:50:46 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.714 17:50:46 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:27.714 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:27.715 17:50:46 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
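The walk above has just finished caching every identify field of the fourth controller into the nvme3 associative array (rpmbs, sqes, cqes, subnqn, the power states, and so on), and get_ctrls_with_feature is now probing each controller's cached CTRATT for FDP support. The decisive test is a single bit: bit 19 of CTRATT, the FDP-supported flag added by NVMe TP4146. A minimal standalone sketch of that check in the same bash idiom, using the CTRATT values this rig reports:

  # nvme0/nvme1/nvme2 report ctratt=0x8000 (bit 19 clear); nvme3 reports 0x88010.
  declare -A nvme3=([ctratt]=0x88010)      # cached from identify, as in the walk above
  if (( nvme3[ctratt] & 1 << 19 )); then   # 0x88010 & 0x80000 != 0
      echo "nvme3 supports Flexible Data Placement"
  fi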
00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:27.715 17:50:46 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:27.715 17:50:46 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:27.715 17:50:46 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:27.715 17:50:46 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:28.288 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:28.864 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:28.864 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:28.864 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:28.864 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:28.864 17:50:47 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:28.864 17:50:47 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:28.864 17:50:47 
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:28.864 17:50:47 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:28.864 ************************************ 00:09:28.864 START TEST nvme_flexible_data_placement 00:09:28.864 ************************************ 00:09:28.864 17:50:47 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:29.125 Initializing NVMe Controllers 00:09:29.125 Attaching to 0000:00:13.0 00:09:29.125 Controller supports FDP Attached to 0000:00:13.0 00:09:29.125 Namespace ID: 1 Endurance Group ID: 1 00:09:29.125 Initialization complete. 00:09:29.125 00:09:29.125 ================================== 00:09:29.125 == FDP tests for Namespace: #01 == 00:09:29.125 ================================== 00:09:29.125 00:09:29.125 Get Feature: FDP: 00:09:29.125 ================= 00:09:29.125 Enabled: Yes 00:09:29.125 FDP configuration Index: 0 00:09:29.125 00:09:29.125 FDP configurations log page 00:09:29.125 =========================== 00:09:29.125 Number of FDP configurations: 1 00:09:29.125 Version: 0 00:09:29.125 Size: 112 00:09:29.125 FDP Configuration Descriptor: 0 00:09:29.125 Descriptor Size: 96 00:09:29.125 Reclaim Group Identifier format: 2 00:09:29.125 FDP Volatile Write Cache: Not Present 00:09:29.125 FDP Configuration: Valid 00:09:29.125 Vendor Specific Size: 0 00:09:29.125 Number of Reclaim Groups: 2 00:09:29.125 Number of Reclaim Unit Handles: 8 00:09:29.125 Max Placement Identifiers: 128 00:09:29.125 Number of Namespaces Supported: 256 00:09:29.125 Reclaim Unit Nominal Size: 6000000 bytes 00:09:29.125 Estimated Reclaim Unit Time Limit: Not Reported 00:09:29.125 RUH Desc #000: RUH Type: Initially Isolated 00:09:29.125 RUH Desc #001: RUH Type: Initially Isolated 00:09:29.125 RUH Desc #002: RUH Type: Initially Isolated 00:09:29.125 RUH Desc #003: RUH Type: Initially Isolated 00:09:29.125 RUH Desc #004: RUH Type: Initially Isolated 00:09:29.125 RUH Desc #005: RUH Type: Initially Isolated 00:09:29.125 RUH Desc #006: RUH Type: Initially Isolated 00:09:29.125 RUH Desc #007: RUH Type: Initially Isolated 00:09:29.125 00:09:29.125 FDP reclaim unit handle usage log page 00:09:29.125 ====================================== 00:09:29.125 Number of Reclaim Unit Handles: 8 00:09:29.125 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:29.125 RUH Usage Desc #001: RUH Attributes: Unused 00:09:29.125 RUH Usage Desc #002: RUH Attributes: Unused 00:09:29.125 RUH Usage Desc #003: RUH Attributes: Unused 00:09:29.126 RUH Usage Desc #004: RUH Attributes: Unused 00:09:29.126 RUH Usage Desc #005: RUH Attributes: Unused 00:09:29.126 RUH Usage Desc #006: RUH Attributes: Unused 00:09:29.126 RUH Usage Desc #007: RUH Attributes: Unused 00:09:29.126 00:09:29.126 FDP statistics log page 00:09:29.126 ======================= 00:09:29.126 Host bytes with metadata written: 1088217088 00:09:29.126 Media bytes with metadata written: 1088319488 00:09:29.126 Media bytes erased: 0 00:09:29.126 00:09:29.126 FDP Reclaim unit handle status 00:09:29.126 ============================== 00:09:29.126 Number of RUHS descriptors: 2 00:09:29.126 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000001232 00:09:29.126 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:29.126 00:09:29.126 FDP write on placement id: 0 success 00:09:29.126 00:09:29.126 Set Feature: Enabling FDP events on Placement handle:
#0 Success 00:09:29.126 00:09:29.126 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:29.126 00:09:29.126 Get Feature: FDP Events for Placement handle: #0 00:09:29.126 ======================== 00:09:29.126 Number of FDP Events: 6 00:09:29.126 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:29.126 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:29.126 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:09:29.126 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:29.126 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:29.126 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:29.126 00:09:29.126 FDP events log page 00:09:29.126 =================== 00:09:29.126 Number of FDP events: 1 00:09:29.126 FDP Event #0: 00:09:29.126 Event Type: RU Not Written to Capacity 00:09:29.126 Placement Identifier: Valid 00:09:29.126 NSID: Valid 00:09:29.126 Location: Valid 00:09:29.126 Placement Identifier: 0 00:09:29.126 Event Timestamp: 7 00:09:29.126 Namespace Identifier: 1 00:09:29.126 Reclaim Group Identifier: 0 00:09:29.126 Reclaim Unit Handle Identifier: 0 00:09:29.126 00:09:29.126 FDP test passed 00:09:29.126 00:09:29.126 real 0m0.243s 00:09:29.126 user 0m0.073s 00:09:29.126 sys 0m0.068s 00:09:29.126 17:50:47 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.126 17:50:47 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:29.126 ************************************ 00:09:29.126 END TEST nvme_flexible_data_placement 00:09:29.126 ************************************ 00:09:29.126 ************************************ 00:09:29.126 END TEST nvme_fdp 00:09:29.126 ************************************ 00:09:29.126 00:09:29.126 real 0m7.711s 00:09:29.126 user 0m1.037s 00:09:29.126 sys 0m1.418s 00:09:29.126 17:50:47 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.126 17:50:47 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:29.126 17:50:47 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:29.126 17:50:47 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:29.126 17:50:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:29.126 17:50:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:29.126 17:50:47 -- common/autotest_common.sh@10 -- # set +x 00:09:29.126 ************************************ 00:09:29.126 START TEST nvme_rpc 00:09:29.126 ************************************ 00:09:29.126 17:50:47 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:29.126 * Looking for test storage...
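The configuration, reclaim-unit-handle usage, statistics, and event dumps printed by the fdp example above correspond to the four log pages the NVMe FDP feature defines (log IDs 20h through 23h). Outside this harness they can be fetched with stock nvme-cli's get-log; treat the exact flags below as an assumption to verify against your nvme-cli version, since the FDP logs are scoped to an endurance group:

  nvme get-log /dev/nvme3 --log-id=0x20 --log-len=512 --lsi=1   # FDP configurations
  nvme get-log /dev/nvme3 --log-id=0x22 --log-len=64 --lsi=1    # FDP statistics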
00:09:29.388 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:29.388 17:50:47 nvme_rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:09:29.388 17:50:47 nvme_rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:09:29.388 17:50:47 nvme_rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:09:29.388 17:50:47 nvme_rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:09:29.388 17:50:47 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.388 17:50:47 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.388 17:50:47 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.388 17:50:47 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.388 17:50:47 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.388 17:50:47 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:29.388 17:50:47 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.388 17:50:47 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.388 17:50:47 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.388 17:50:47 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.388 17:50:47 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.388 17:50:47 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:29.388 17:50:47 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:29.388 17:50:47 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.388 17:50:47 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:29.388 17:50:47 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:29.388 17:50:47 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:29.388 17:50:47 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.388 17:50:47 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:29.388 17:50:47 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.388 17:50:47 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:29.388 17:50:47 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:29.388 17:50:47 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.388 17:50:47 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:29.388 17:50:47 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.388 17:50:47 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.388 17:50:47 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.388 17:50:47 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:29.388 17:50:47 nvme_rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.388 17:50:47 nvme_rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:09:29.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.388 --rc genhtml_branch_coverage=1 00:09:29.388 --rc genhtml_function_coverage=1 00:09:29.388 --rc genhtml_legend=1 00:09:29.388 --rc geninfo_all_blocks=1 00:09:29.388 --rc geninfo_unexecuted_blocks=1 00:09:29.388 00:09:29.388 ' 00:09:29.388 17:50:47 nvme_rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:09:29.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.388 --rc genhtml_branch_coverage=1 00:09:29.388 --rc genhtml_function_coverage=1 00:09:29.388 --rc genhtml_legend=1 00:09:29.388 --rc geninfo_all_blocks=1 00:09:29.388 --rc geninfo_unexecuted_blocks=1 00:09:29.388 00:09:29.388 ' 00:09:29.388 17:50:47 nvme_rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 
00:09:29.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.388 --rc genhtml_branch_coverage=1 00:09:29.388 --rc genhtml_function_coverage=1 00:09:29.388 --rc genhtml_legend=1 00:09:29.388 --rc geninfo_all_blocks=1 00:09:29.388 --rc geninfo_unexecuted_blocks=1 00:09:29.388 00:09:29.388 ' 00:09:29.388 17:50:47 nvme_rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:09:29.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.388 --rc genhtml_branch_coverage=1 00:09:29.388 --rc genhtml_function_coverage=1 00:09:29.388 --rc genhtml_legend=1 00:09:29.388 --rc geninfo_all_blocks=1 00:09:29.388 --rc geninfo_unexecuted_blocks=1 00:09:29.388 00:09:29.388 ' 00:09:29.388 17:50:47 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:29.388 17:50:47 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:29.388 17:50:47 nvme_rpc -- common/autotest_common.sh@1505 -- # bdfs=() 00:09:29.388 17:50:47 nvme_rpc -- common/autotest_common.sh@1505 -- # local bdfs 00:09:29.388 17:50:47 nvme_rpc -- common/autotest_common.sh@1506 -- # bdfs=($(get_nvme_bdfs)) 00:09:29.388 17:50:47 nvme_rpc -- common/autotest_common.sh@1506 -- # get_nvme_bdfs 00:09:29.388 17:50:47 nvme_rpc -- common/autotest_common.sh@1494 -- # bdfs=() 00:09:29.388 17:50:47 nvme_rpc -- common/autotest_common.sh@1494 -- # local bdfs 00:09:29.388 17:50:47 nvme_rpc -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:29.388 17:50:47 nvme_rpc -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:09:29.388 17:50:47 nvme_rpc -- common/autotest_common.sh@1495 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:29.388 17:50:47 nvme_rpc -- common/autotest_common.sh@1496 -- # (( 4 == 0 )) 00:09:29.388 17:50:47 nvme_rpc -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:29.388 17:50:47 nvme_rpc -- common/autotest_common.sh@1508 -- # echo 0000:00:10.0 00:09:29.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.388 17:50:47 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:29.388 17:50:47 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65654 00:09:29.388 17:50:47 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:29.388 17:50:47 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65654 00:09:29.388 17:50:47 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 65654 ']' 00:09:29.388 17:50:47 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.388 17:50:47 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:29.389 17:50:47 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.389 17:50:47 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:29.389 17:50:47 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:29.389 17:50:47 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.389 [2024-10-25 17:50:47.775421] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
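Stripped of tracing, the bdf discovery above is a single pipeline: gen_nvme.sh emits a bdev_nvme attach configuration as JSON, and the first traddr wins. A condensed equivalent, with the paths as used on this rig:

  bdf=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)
  echo "$bdf"   # -> 0000:00:10.0 here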
00:09:29.389 [2024-10-25 17:50:47.775711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65654 ] 00:09:29.649 [2024-10-25 17:50:47.934546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:29.649 [2024-10-25 17:50:48.047963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.649 [2024-10-25 17:50:48.048024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.288 17:50:48 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:30.288 17:50:48 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:30.288 17:50:48 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:30.550 Nvme0n1 00:09:30.550 17:50:48 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:30.550 17:50:48 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:30.811 request: 00:09:30.811 { 00:09:30.811 "bdev_name": "Nvme0n1", 00:09:30.811 "filename": "non_existing_file", 00:09:30.811 "method": "bdev_nvme_apply_firmware", 00:09:30.811 "req_id": 1 00:09:30.811 } 00:09:30.811 Got JSON-RPC error response 00:09:30.811 response: 00:09:30.811 { 00:09:30.811 "code": -32603, 00:09:30.811 "message": "open file failed." 00:09:30.811 } 00:09:30.811 17:50:49 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:30.811 17:50:49 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:30.811 17:50:49 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:31.073 17:50:49 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:31.073 17:50:49 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65654 00:09:31.073 17:50:49 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 65654 ']' 00:09:31.073 17:50:49 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 65654 00:09:31.073 17:50:49 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:09:31.073 17:50:49 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:31.073 17:50:49 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65654 00:09:31.073 killing process with pid 65654 00:09:31.073 17:50:49 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:31.073 17:50:49 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:31.073 17:50:49 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65654' 00:09:31.073 17:50:49 nvme_rpc -- common/autotest_common.sh@969 -- # kill 65654 00:09:31.073 17:50:49 nvme_rpc -- common/autotest_common.sh@974 -- # wait 65654 00:09:32.991 ************************************ 00:09:32.991 END TEST nvme_rpc 00:09:32.991 ************************************ 00:09:32.991 00:09:32.991 real 0m3.745s 00:09:32.991 user 0m7.047s 00:09:32.991 sys 0m0.592s 00:09:32.991 17:50:51 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:32.991 17:50:51 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.991 17:50:51 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:32.991 17:50:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:09:32.991 17:50:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:32.991 17:50:51 -- common/autotest_common.sh@10 -- # set +x 00:09:32.991 ************************************ 00:09:32.991 START TEST nvme_rpc_timeouts 00:09:32.991 ************************************ 00:09:32.991 17:50:51 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:32.991 * Looking for test storage... 00:09:32.991 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:32.991 17:50:51 nvme_rpc_timeouts -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:09:32.991 17:50:51 nvme_rpc_timeouts -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:09:32.991 17:50:51 nvme_rpc_timeouts -- common/autotest_common.sh@1689 -- # lcov --version 00:09:33.253 17:50:51 nvme_rpc_timeouts -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:09:33.253 17:50:51 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.253 17:50:51 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.253 17:50:51 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.253 17:50:51 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.253 17:50:51 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.253 17:50:51 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.253 17:50:51 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.253 17:50:51 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.253 17:50:51 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.253 17:50:51 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.254 17:50:51 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.254 17:50:51 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:33.254 17:50:51 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:33.254 17:50:51 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.254 17:50:51 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:33.254 17:50:51 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:33.254 17:50:51 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:33.254 17:50:51 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.254 17:50:51 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:33.254 17:50:51 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.254 17:50:51 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:33.254 17:50:51 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:33.254 17:50:51 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.254 17:50:51 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:33.254 17:50:51 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.254 17:50:51 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.254 17:50:51 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
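The nvme_rpc suite that wrapped up just above boils down to three JSON-RPC calls against the freshly started target, with the middle one expected to fail with -32603 "open file failed.". Condensed from the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0   # exposes Nvme0n1
  "$rpc" bdev_nvme_apply_firmware non_existing_file Nvme0n1 || echo "failed as expected"
  "$rpc" bdev_nvme_detach_controller Nvme0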
00:09:33.254 17:50:51 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:33.254 17:50:51 nvme_rpc_timeouts -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.254 17:50:51 nvme_rpc_timeouts -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:09:33.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.254 --rc genhtml_branch_coverage=1 00:09:33.254 --rc genhtml_function_coverage=1 00:09:33.254 --rc genhtml_legend=1 00:09:33.254 --rc geninfo_all_blocks=1 00:09:33.254 --rc geninfo_unexecuted_blocks=1 00:09:33.254 00:09:33.254 ' 00:09:33.254 17:50:51 nvme_rpc_timeouts -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:09:33.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.254 --rc genhtml_branch_coverage=1 00:09:33.254 --rc genhtml_function_coverage=1 00:09:33.254 --rc genhtml_legend=1 00:09:33.254 --rc geninfo_all_blocks=1 00:09:33.254 --rc geninfo_unexecuted_blocks=1 00:09:33.254 00:09:33.254 ' 00:09:33.254 17:50:51 nvme_rpc_timeouts -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:09:33.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.254 --rc genhtml_branch_coverage=1 00:09:33.254 --rc genhtml_function_coverage=1 00:09:33.254 --rc genhtml_legend=1 00:09:33.254 --rc geninfo_all_blocks=1 00:09:33.254 --rc geninfo_unexecuted_blocks=1 00:09:33.254 00:09:33.254 ' 00:09:33.254 17:50:51 nvme_rpc_timeouts -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:09:33.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.254 --rc genhtml_branch_coverage=1 00:09:33.254 --rc genhtml_function_coverage=1 00:09:33.254 --rc genhtml_legend=1 00:09:33.254 --rc geninfo_all_blocks=1 00:09:33.254 --rc geninfo_unexecuted_blocks=1 00:09:33.254 00:09:33.254 ' 00:09:33.254 17:50:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:33.254 17:50:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65725 00:09:33.254 17:50:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65725 00:09:33.254 17:50:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65758 00:09:33.254 17:50:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:33.254 17:50:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65758 00:09:33.254 17:50:51 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 65758 ']' 00:09:33.254 17:50:51 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.254 17:50:51 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:33.254 17:50:51 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.254 17:50:51 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:33.254 17:50:51 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:33.254 17:50:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:33.254 [2024-10-25 17:50:51.578594] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
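That lcov version probe has now run identically for both RPC suites. Condensed into a standalone helper, the less-than comparison it traces (scripts/common.sh) splits each version on dots, dashes, and colons and compares field by field, padding the shorter one with zeros:

  lt() {   # return 0 if version $1 is older than $2
      local -a ver1 ver2
      local v n
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$2"
      n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < n; v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1   # equal, so not less-than
  }
  lt 1.15 2 && echo "lcov 1.x detected: keep the branch-coverage options above"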
00:09:33.254 [2024-10-25 17:50:51.578745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65758 ] 00:09:33.516 [2024-10-25 17:50:51.741809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:33.516 [2024-10-25 17:50:51.907848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.516 [2024-10-25 17:50:51.907940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.462 Checking default timeout settings: 00:09:34.462 17:50:52 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:34.462 17:50:52 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:09:34.462 17:50:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:34.462 17:50:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:34.723 Making settings changes with rpc: 00:09:34.723 17:50:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:34.723 17:50:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:34.986 Check default vs. modified settings: 00:09:34.986 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:09:34.986 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:35.248 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:35.248 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:35.248 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:35.248 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65725 00:09:35.248 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:35.248 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:35.248 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:35.248 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65725 00:09:35.248 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:35.248 Setting action_on_timeout is changed as expected. 00:09:35.248 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:35.248 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:35.248 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:09:35.248 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:35.248 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65725 00:09:35.248 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:35.248 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:35.248 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:35.249 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65725 00:09:35.249 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:35.249 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:35.249 Setting timeout_us is changed as expected. 00:09:35.249 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:35.249 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:35.249 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:09:35.249 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:35.249 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:35.249 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65725 00:09:35.249 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:35.249 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:35.249 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65725 00:09:35.249 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:35.249 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:35.249 Setting timeout_admin_us is changed as expected. 00:09:35.249 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:35.249 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:35.249 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
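All three fields flipped as expected. The whole nvme_rpc_timeouts check reduces to: snapshot save_config, change the timeouts over RPC, snapshot again, and compare the extracted values. A condensed replay of the traced commands (the grep/awk/sed extraction is exactly what the loop above ran; the file names reuse this run's pid 65725):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" save_config > /tmp/settings_default_65725
  "$rpc" bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
  "$rpc" save_config > /tmp/settings_modified_65725
  for setting in action_on_timeout timeout_us timeout_admin_us; do
      before=$(grep "$setting" /tmp/settings_default_65725 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      after=$(grep "$setting" /tmp/settings_modified_65725 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      [[ $before != "$after" ]] && echo "Setting $setting is changed as expected."
  done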
00:09:35.249 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:35.249 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65725 /tmp/settings_modified_65725 00:09:35.249 17:50:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65758 00:09:35.249 17:50:53 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 65758 ']' 00:09:35.249 17:50:53 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 65758 00:09:35.249 17:50:53 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:09:35.249 17:50:53 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:35.249 17:50:53 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65758 00:09:35.249 killing process with pid 65758 00:09:35.249 17:50:53 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:35.249 17:50:53 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:35.249 17:50:53 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65758' 00:09:35.249 17:50:53 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 65758 00:09:35.249 17:50:53 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 65758 00:09:37.169 RPC TIMEOUT SETTING TEST PASSED. 00:09:37.169 17:50:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:09:37.169 00:09:37.169 real 0m3.763s 00:09:37.169 user 0m7.096s 00:09:37.169 sys 0m0.690s 00:09:37.169 17:50:55 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:37.169 ************************************ 00:09:37.169 END TEST nvme_rpc_timeouts 00:09:37.169 ************************************ 00:09:37.169 17:50:55 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:37.169 17:50:55 -- spdk/autotest.sh@239 -- # uname -s 00:09:37.169 17:50:55 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:37.169 17:50:55 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:37.169 17:50:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:37.169 17:50:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:37.169 17:50:55 -- common/autotest_common.sh@10 -- # set +x 00:09:37.169 ************************************ 00:09:37.169 START TEST sw_hotplug 00:09:37.169 ************************************ 00:09:37.169 17:50:55 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:37.169 * Looking for test storage... 
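The sw_hotplug suite starting here first enumerates NVMe functions by PCI class code; the scan traced a few lines below keeps anything whose class/subclass/progif is 01/08/02 (mass storage, non-volatile memory, NVM Express). That filter, lifted from the trace into one pipeline:

  lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'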
00:09:37.169 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:37.169 17:50:55 sw_hotplug -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:09:37.169 17:50:55 sw_hotplug -- common/autotest_common.sh@1689 -- # lcov --version 00:09:37.169 17:50:55 sw_hotplug -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:09:37.169 17:50:55 sw_hotplug -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:09:37.169 17:50:55 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.169 17:50:55 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.169 17:50:55 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.169 17:50:55 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.169 17:50:55 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.169 17:50:55 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.169 17:50:55 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.169 17:50:55 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.169 17:50:55 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.169 17:50:55 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.169 17:50:55 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.169 17:50:55 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:37.169 17:50:55 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:37.169 17:50:55 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.169 17:50:55 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:37.169 17:50:55 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:37.169 17:50:55 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:37.169 17:50:55 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.169 17:50:55 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:37.169 17:50:55 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.169 17:50:55 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:37.169 17:50:55 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:37.169 17:50:55 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.169 17:50:55 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:37.169 17:50:55 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.169 17:50:55 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.169 17:50:55 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.169 17:50:55 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:37.169 17:50:55 sw_hotplug -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.169 17:50:55 sw_hotplug -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:09:37.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.169 --rc genhtml_branch_coverage=1 00:09:37.169 --rc genhtml_function_coverage=1 00:09:37.169 --rc genhtml_legend=1 00:09:37.169 --rc geninfo_all_blocks=1 00:09:37.169 --rc geninfo_unexecuted_blocks=1 00:09:37.169 00:09:37.169 ' 00:09:37.169 17:50:55 sw_hotplug -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:09:37.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.169 --rc genhtml_branch_coverage=1 00:09:37.169 --rc genhtml_function_coverage=1 00:09:37.169 --rc genhtml_legend=1 00:09:37.169 --rc geninfo_all_blocks=1 00:09:37.169 --rc geninfo_unexecuted_blocks=1 00:09:37.169 00:09:37.169 ' 00:09:37.169 17:50:55 
sw_hotplug -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:09:37.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.169 --rc genhtml_branch_coverage=1 00:09:37.169 --rc genhtml_function_coverage=1 00:09:37.169 --rc genhtml_legend=1 00:09:37.169 --rc geninfo_all_blocks=1 00:09:37.169 --rc geninfo_unexecuted_blocks=1 00:09:37.169 00:09:37.169 ' 00:09:37.169 17:50:55 sw_hotplug -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:09:37.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.169 --rc genhtml_branch_coverage=1 00:09:37.169 --rc genhtml_function_coverage=1 00:09:37.169 --rc genhtml_legend=1 00:09:37.169 --rc geninfo_all_blocks=1 00:09:37.169 --rc geninfo_unexecuted_blocks=1 00:09:37.169 00:09:37.169 ' 00:09:37.169 17:50:55 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:37.432 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:37.432 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:37.432 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:37.432 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:37.432 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:37.432 17:50:55 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:37.432 17:50:55 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:37.432 17:50:55 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:09:37.432 17:50:55 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:37.432 
17:50:55 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:37.432 17:50:55 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:37.433 17:50:55 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:37.433 17:50:55 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:37.433 17:50:55 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:37.433 17:50:55 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:37.433 17:50:55 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:37.433 17:50:55 sw_hotplug -- scripts/common.sh@321 -- # for bdf 
in "${nvmes[@]}" 00:09:37.433 17:50:55 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:37.433 17:50:55 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:37.433 17:50:55 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:37.433 17:50:55 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:37.433 17:50:55 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:37.433 17:50:55 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:37.433 17:50:55 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:37.433 17:50:55 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:37.433 17:50:55 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:37.695 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:37.956 Waiting for block devices as requested 00:09:37.956 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:38.218 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:38.218 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:38.218 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:43.515 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:43.515 17:51:01 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:43.515 17:51:01 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:43.776 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:43.776 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:43.776 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:44.038 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:44.300 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:44.300 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:44.300 17:51:02 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:44.300 17:51:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:44.561 17:51:02 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:44.561 17:51:02 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:44.561 17:51:02 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66620 00:09:44.561 17:51:02 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:44.561 17:51:02 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:44.561 17:51:02 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:44.561 17:51:02 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:44.561 17:51:02 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:09:44.561 17:51:02 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:09:44.561 17:51:02 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:09:44.561 17:51:02 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:09:44.561 17:51:02 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:09:44.561 17:51:02 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:44.561 17:51:02 sw_hotplug -- nvme/sw_hotplug.sh@28 
-- # local hotplug_wait=6 00:09:44.562 17:51:02 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:44.562 17:51:02 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:44.562 17:51:02 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:44.562 Initializing NVMe Controllers 00:09:44.562 Attaching to 0000:00:10.0 00:09:44.562 Attaching to 0000:00:11.0 00:09:44.562 Attached to 0000:00:11.0 00:09:44.562 Attached to 0000:00:10.0 00:09:44.562 Initialization complete. Starting I/O... 00:09:44.562 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:44.562 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:44.562 00:09:45.951 QEMU NVMe Ctrl (12341 ): 2624 I/Os completed (+2624) 00:09:45.951 QEMU NVMe Ctrl (12340 ): 2627 I/Os completed (+2627) 00:09:45.951 00:09:46.525 QEMU NVMe Ctrl (12341 ): 5880 I/Os completed (+3256) 00:09:46.525 QEMU NVMe Ctrl (12340 ): 5883 I/Os completed (+3256) 00:09:46.525 00:09:47.940 QEMU NVMe Ctrl (12341 ): 9128 I/Os completed (+3248) 00:09:47.940 QEMU NVMe Ctrl (12340 ): 9135 I/Os completed (+3252) 00:09:47.940 00:09:48.884 QEMU NVMe Ctrl (12341 ): 12416 I/Os completed (+3288) 00:09:48.884 QEMU NVMe Ctrl (12340 ): 12423 I/Os completed (+3288) 00:09:48.884 00:09:49.828 QEMU NVMe Ctrl (12341 ): 15504 I/Os completed (+3088) 00:09:49.828 QEMU NVMe Ctrl (12340 ): 15511 I/Os completed (+3088) 00:09:49.828 00:09:50.400 17:51:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:50.400 17:51:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:50.400 17:51:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:50.400 [2024-10-25 17:51:08.764805] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:50.400 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:50.400 [2024-10-25 17:51:08.766057] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.400 [2024-10-25 17:51:08.766098] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.400 [2024-10-25 17:51:08.766114] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.400 [2024-10-25 17:51:08.766133] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.400 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:50.400 [2024-10-25 17:51:08.767904] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.400 [2024-10-25 17:51:08.767947] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.400 [2024-10-25 17:51:08.767961] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.400 [2024-10-25 17:51:08.767975] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.400 17:51:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:50.400 17:51:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:50.400 [2024-10-25 17:51:08.789069] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:50.400 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:50.400 [2024-10-25 17:51:08.790193] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.400 [2024-10-25 17:51:08.790309] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.400 [2024-10-25 17:51:08.790350] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.400 [2024-10-25 17:51:08.790411] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.400 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:50.400 [2024-10-25 17:51:08.792112] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.400 [2024-10-25 17:51:08.792212] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.400 [2024-10-25 17:51:08.792250] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.400 [2024-10-25 17:51:08.792317] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.400 17:51:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:50.400 17:51:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:50.659 17:51:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:50.659 17:51:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:50.659 17:51:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:50.659 00:09:50.659 17:51:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:50.659 17:51:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:50.659 17:51:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:50.659 17:51:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:50.659 17:51:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:50.659 Attaching to 0000:00:10.0 00:09:50.659 Attached to 0000:00:10.0 00:09:50.659 17:51:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:50.659 17:51:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:50.659 17:51:09 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:50.659 Attaching to 0000:00:11.0 00:09:50.659 Attached to 0000:00:11.0 00:09:51.594 QEMU NVMe Ctrl (12340 ): 3123 I/Os completed (+3123) 00:09:51.594 QEMU NVMe Ctrl (12341 ): 2831 I/Os completed (+2831) 00:09:51.594 00:09:52.535 QEMU NVMe Ctrl (12340 ): 6331 I/Os completed (+3208) 00:09:52.535 QEMU NVMe Ctrl (12341 ): 6000 I/Os completed (+3169) 00:09:52.535 00:09:53.922 QEMU NVMe Ctrl (12340 ): 9491 I/Os completed (+3160) 00:09:53.922 QEMU NVMe Ctrl (12341 ): 9183 I/Os completed (+3183) 00:09:53.922 00:09:54.867 QEMU NVMe Ctrl (12340 ): 12719 I/Os completed (+3228) 00:09:54.867 QEMU NVMe Ctrl (12341 ): 12425 I/Os completed (+3242) 00:09:54.867 00:09:55.812 QEMU NVMe Ctrl (12340 ): 15935 I/Os completed (+3216) 00:09:55.812 QEMU NVMe Ctrl (12341 ): 15641 I/Os completed (+3216) 00:09:55.812 00:09:56.756 QEMU NVMe Ctrl (12340 ): 19199 I/Os completed (+3264) 00:09:56.756 QEMU NVMe Ctrl (12341 ): 18905 I/Os completed (+3264) 00:09:56.756 00:09:57.715 QEMU NVMe Ctrl (12340 ): 22847 I/Os completed (+3648) 00:09:57.715 QEMU NVMe Ctrl (12341 ): 22564 I/Os completed (+3659) 00:09:57.715 00:09:58.647 QEMU NVMe Ctrl (12340 ): 26529 I/Os completed (+3682) 00:09:58.647 QEMU NVMe Ctrl (12341 ): 26247 I/Os completed (+3683) 
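The bare "echo 1" at sw_hotplug.sh@40 is the hot-remove itself, and the echoes at @56 through @62 are the reverse path: rescan the bus, steer the rediscovered device to the wanted driver, probe it, then clear the override. The sysfs node names are not shown in the trace, so the paths below are the standard /sys/bus/pci interface this sequence most plausibly maps onto (needs root):

    bdf=0000:00:10.0

    # Hot-remove: the kernel tears the device down; SPDK's AER handling then
    # reports the controller "in failed state", as seen above.
    echo 1 > "/sys/bus/pci/devices/${bdf}/remove"

    # Re-attach: rescan the bus, pin the device to a driver, probe, clean up.
    echo 1 > /sys/bus/pci/rescan
    echo uio_pci_generic > "/sys/bus/pci/devices/${bdf}/driver_override"
    echo "${bdf}" > /sys/bus/pci/drivers_probe
    echo '' > "/sys/bus/pci/devices/${bdf}/driver_override"   # clear the override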
00:09:58.647 00:09:59.587 QEMU NVMe Ctrl (12340 ): 30050 I/Os completed (+3521) 00:09:59.587 QEMU NVMe Ctrl (12341 ): 29831 I/Os completed (+3584) 00:09:59.587 00:10:00.532 QEMU NVMe Ctrl (12340 ): 33210 I/Os completed (+3160) 00:10:00.532 QEMU NVMe Ctrl (12341 ): 33021 I/Os completed (+3190) 00:10:00.532 00:10:01.918 QEMU NVMe Ctrl (12340 ): 36358 I/Os completed (+3148) 00:10:01.918 QEMU NVMe Ctrl (12341 ): 36205 I/Os completed (+3184) 00:10:01.918 00:10:02.859 QEMU NVMe Ctrl (12340 ): 39618 I/Os completed (+3260) 00:10:02.859 QEMU NVMe Ctrl (12341 ): 39466 I/Os completed (+3261) 00:10:02.859 00:10:02.859 17:51:21 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:02.859 17:51:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:02.859 17:51:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:02.859 17:51:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:02.859 [2024-10-25 17:51:21.094811] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:02.859 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:02.859 [2024-10-25 17:51:21.097533] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.859 [2024-10-25 17:51:21.097684] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.859 [2024-10-25 17:51:21.097721] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.859 [2024-10-25 17:51:21.098066] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.859 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:02.859 [2024-10-25 17:51:21.100231] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.859 [2024-10-25 17:51:21.100278] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.859 [2024-10-25 17:51:21.100295] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.859 [2024-10-25 17:51:21.100309] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.859 17:51:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:02.859 17:51:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:02.859 [2024-10-25 17:51:21.116449] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:02.859 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:02.859 [2024-10-25 17:51:21.117530] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.860 [2024-10-25 17:51:21.117580] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.860 [2024-10-25 17:51:21.117601] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.860 [2024-10-25 17:51:21.117616] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.860 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:02.860 [2024-10-25 17:51:21.119300] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.860 [2024-10-25 17:51:21.119330] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.860 [2024-10-25 17:51:21.119345] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.860 [2024-10-25 17:51:21.119360] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.860 17:51:21 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:02.860 17:51:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:02.860 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:02.860 EAL: Scan for (pci) bus failed. 00:10:02.860 17:51:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:02.860 17:51:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:02.860 17:51:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:03.119 17:51:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:03.119 17:51:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:03.119 17:51:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:03.119 17:51:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:03.119 17:51:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:03.119 Attaching to 0000:00:10.0 00:10:03.119 Attached to 0000:00:10.0 00:10:03.119 17:51:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:03.119 17:51:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:03.119 17:51:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:03.119 Attaching to 0000:00:11.0 00:10:03.119 Attached to 0000:00:11.0 00:10:03.689 QEMU NVMe Ctrl (12340 ): 1884 I/Os completed (+1884) 00:10:03.689 QEMU NVMe Ctrl (12341 ): 1624 I/Os completed (+1624) 00:10:03.689 00:10:04.628 QEMU NVMe Ctrl (12340 ): 5160 I/Os completed (+3276) 00:10:04.628 QEMU NVMe Ctrl (12341 ): 4900 I/Os completed (+3276) 00:10:04.628 00:10:05.571 QEMU NVMe Ctrl (12340 ): 8400 I/Os completed (+3240) 00:10:05.571 QEMU NVMe Ctrl (12341 ): 8107 I/Os completed (+3207) 00:10:05.571 00:10:06.955 QEMU NVMe Ctrl (12340 ): 11576 I/Os completed (+3176) 00:10:06.955 QEMU NVMe Ctrl (12341 ): 11283 I/Os completed (+3176) 00:10:06.955 00:10:07.549 QEMU NVMe Ctrl (12340 ): 14720 I/Os completed (+3144) 00:10:07.549 QEMU NVMe Ctrl (12341 ): 14427 I/Os completed (+3144) 00:10:07.549 00:10:08.934 QEMU NVMe Ctrl (12340 ): 17403 I/Os completed (+2683) 00:10:08.934 QEMU NVMe Ctrl (12341 ): 17112 I/Os completed (+2685) 00:10:08.934 00:10:09.876 QEMU NVMe Ctrl (12340 ): 20587 I/Os completed (+3184) 00:10:09.876 QEMU NVMe Ctrl (12341 ): 20296 I/Os completed (+3184) 00:10:09.876 
00:10:10.819 QEMU NVMe Ctrl (12340 ): 23795 I/Os completed (+3208) 00:10:10.819 QEMU NVMe Ctrl (12341 ): 23514 I/Os completed (+3218) 00:10:10.819 00:10:11.763 QEMU NVMe Ctrl (12340 ): 26931 I/Os completed (+3136) 00:10:11.763 QEMU NVMe Ctrl (12341 ): 26662 I/Os completed (+3148) 00:10:11.763 00:10:12.707 QEMU NVMe Ctrl (12340 ): 29984 I/Os completed (+3053) 00:10:12.707 QEMU NVMe Ctrl (12341 ): 29724 I/Os completed (+3062) 00:10:12.707 00:10:13.652 QEMU NVMe Ctrl (12340 ): 33063 I/Os completed (+3079) 00:10:13.652 QEMU NVMe Ctrl (12341 ): 32740 I/Os completed (+3016) 00:10:13.652 00:10:14.596 QEMU NVMe Ctrl (12340 ): 36291 I/Os completed (+3228) 00:10:14.596 QEMU NVMe Ctrl (12341 ): 35968 I/Os completed (+3228) 00:10:14.596 00:10:15.178 17:51:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:15.178 17:51:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:15.178 17:51:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:15.178 17:51:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:15.178 [2024-10-25 17:51:33.428767] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:15.178 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:15.178 [2024-10-25 17:51:33.430305] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.178 [2024-10-25 17:51:33.430434] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.178 [2024-10-25 17:51:33.430472] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.178 [2024-10-25 17:51:33.430536] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.178 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:15.178 [2024-10-25 17:51:33.432503] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.178 [2024-10-25 17:51:33.432638] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.178 [2024-10-25 17:51:33.432682] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.178 [2024-10-25 17:51:33.433127] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.179 17:51:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:15.179 17:51:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:15.179 [2024-10-25 17:51:33.450485] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:15.179 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:15.179 [2024-10-25 17:51:33.451729] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.179 [2024-10-25 17:51:33.451797] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.179 [2024-10-25 17:51:33.451834] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.179 [2024-10-25 17:51:33.451876] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.179 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:15.179 [2024-10-25 17:51:33.453739] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.179 [2024-10-25 17:51:33.453839] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.179 [2024-10-25 17:51:33.453915] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.179 [2024-10-25 17:51:33.453946] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.179 17:51:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:15.179 17:51:33 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:15.179 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:15.179 EAL: Scan for (pci) bus failed. 00:10:15.179 17:51:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:15.179 17:51:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:15.179 17:51:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:15.452 17:51:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:15.452 17:51:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:15.452 17:51:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:15.452 17:51:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:15.452 17:51:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:15.452 Attaching to 0000:00:10.0 00:10:15.452 Attached to 0000:00:10.0 00:10:15.452 17:51:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:15.452 17:51:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:15.452 17:51:33 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:15.452 Attaching to 0000:00:11.0 00:10:15.452 Attached to 0000:00:11.0 00:10:15.452 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:15.452 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:15.452 [2024-10-25 17:51:33.755663] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:27.681 17:51:45 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:27.681 17:51:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:27.681 17:51:45 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.98 00:10:27.681 17:51:45 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.98 00:10:27.681 17:51:45 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:10:27.681 17:51:45 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.98 00:10:27.681 17:51:45 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.98 2 00:10:27.681 remove_attach_helper took 42.98s to complete (handling 2 nvme drive(s)) 17:51:45 sw_hotplug -- 
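The 42.98 figure is produced by bash's own time keyword: timing_cmd sets TIMEFORMAT=%2R (elapsed real time, two decimals) and captures the result into helper_time. The core mechanism in isolation (sleep stands in for the timed helper):

    TIMEFORMAT=%2R                                    # wall-clock seconds, 2 decimals
    elapsed=$( { time sleep 1.5 >/dev/null 2>&1; } 2>&1 )
    printf 'took %ss\n' "$elapsed"                    # -> took 1.50s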
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:34.248 17:51:51 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66620 00:10:34.248 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66620) - No such process 00:10:34.248 17:51:51 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66620 00:10:34.248 17:51:51 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:34.248 17:51:51 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:34.248 17:51:51 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:34.248 17:51:51 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67170 00:10:34.248 17:51:51 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:34.248 17:51:51 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67170 00:10:34.248 17:51:51 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:34.248 17:51:51 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 67170 ']' 00:10:34.248 17:51:51 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.248 17:51:51 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:34.248 17:51:51 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.248 17:51:51 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:34.248 17:51:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:34.248 [2024-10-25 17:51:51.840674] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
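Two idioms worth noting in this stretch: kill -0 delivers no signal and only tests whether a PID is still alive (hence the "No such process" above once the hotplug app has exited), and waitforlisten blocks until the freshly started spdk_tgt answers on its RPC socket. A simplified version of both (the PID value is illustrative; the polling loop is a sketch of the idea, not the verbatim helper):

    pid=66620
    kill -0 "$pid" 2>/dev/null || echo "process $pid is gone"

    # Poll the target's RPC socket until it responds (or give up after ~10s).
    for _ in {1..100}; do
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done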
00:10:34.248 [2024-10-25 17:51:51.840992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67170 ] 00:10:34.248 [2024-10-25 17:51:52.002803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.248 [2024-10-25 17:51:52.111167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.517 17:51:52 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:34.518 17:51:52 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:10:34.518 17:51:52 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:34.518 17:51:52 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.518 17:51:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:34.518 17:51:52 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.518 17:51:52 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:34.518 17:51:52 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:34.518 17:51:52 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:34.518 17:51:52 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:34.518 17:51:52 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:34.518 17:51:52 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:34.518 17:51:52 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:34.518 17:51:52 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:10:34.518 17:51:52 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:34.518 17:51:52 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:34.518 17:51:52 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:34.518 17:51:52 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:34.518 17:51:52 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:41.090 17:51:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:41.090 17:51:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:41.090 17:51:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:41.090 17:51:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:41.090 17:51:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:41.090 17:51:58 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:41.090 17:51:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:41.090 17:51:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:41.090 17:51:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:41.090 17:51:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:41.090 17:51:58 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.090 17:51:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:41.090 17:51:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:41.090 17:51:58 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.090 17:51:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:41.090 17:51:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:41.090 [2024-10-25 17:51:58.898466] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: 
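Unlike the first pass, which drove build/examples/hotplug directly, this pass (use_bdev=true) exercises hotplug through a running SPDK target and observes events at the bdev layer, after enabling its NVMe hotplug monitor over JSON-RPC. The equivalent by hand:

    # Enable the bdev-layer NVMe hotplug monitor (rpc_cmd above does the same);
    # the test later toggles it with -d / -e before the next pass.
    scripts/rpc.py bdev_nvme_set_hotplug -e
    scripts/rpc.py bdev_nvme_set_hotplug -d    # disable again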
[0000:00:10.0, 0] in failed state. 00:10:41.090 [2024-10-25 17:51:58.900124] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.090 [2024-10-25 17:51:58.900165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:41.090 [2024-10-25 17:51:58.900181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.090 [2024-10-25 17:51:58.900202] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.090 [2024-10-25 17:51:58.900212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:41.090 [2024-10-25 17:51:58.900223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.090 [2024-10-25 17:51:58.900232] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.090 [2024-10-25 17:51:58.900242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:41.090 [2024-10-25 17:51:58.900250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.090 [2024-10-25 17:51:58.900264] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.090 [2024-10-25 17:51:58.900273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:41.090 [2024-10-25 17:51:58.900283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.090 17:51:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:41.090 17:51:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:41.090 17:51:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:41.090 17:51:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:41.090 17:51:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:41.090 17:51:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:41.090 17:51:59 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.090 17:51:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:41.090 17:51:59 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.090 17:51:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:41.090 17:51:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:41.090 [2024-10-25 17:51:59.498482] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
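bdev_bdfs, traced above, is how the test decides a hot-removed controller is really gone: ask the target for its bdevs, reduce them to unique PCI addresses, and loop with sleep 0.5 until the removed address drops out of the list. The pipeline on its own:

    # Unique, sorted PCI addresses of all NVMe-backed bdevs the target knows.
    scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u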
00:10:41.090 [2024-10-25 17:51:59.500216] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.090 [2024-10-25 17:51:59.500269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:41.090 [2024-10-25 17:51:59.500285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.090 [2024-10-25 17:51:59.500310] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.090 [2024-10-25 17:51:59.500321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:41.090 [2024-10-25 17:51:59.500331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.090 [2024-10-25 17:51:59.500343] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.090 [2024-10-25 17:51:59.500352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:41.090 [2024-10-25 17:51:59.500363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.090 [2024-10-25 17:51:59.500373] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.090 [2024-10-25 17:51:59.500384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:41.090 [2024-10-25 17:51:59.500393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.681 17:51:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:41.681 17:51:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:41.681 17:51:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:41.681 17:51:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:41.681 17:51:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:41.681 17:51:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:41.681 17:51:59 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.681 17:51:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:41.681 17:51:59 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.681 17:51:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:41.681 17:51:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:41.681 17:52:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:41.681 17:52:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:41.681 17:52:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:41.939 17:52:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:41.939 17:52:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:41.939 17:52:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:41.939 17:52:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:41.939 17:52:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:10:41.939 17:52:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:41.939 17:52:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:41.939 17:52:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:54.149 17:52:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:54.149 17:52:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:54.149 17:52:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:54.149 17:52:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:54.149 17:52:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:54.149 17:52:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:54.149 17:52:12 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.149 17:52:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:54.149 17:52:12 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.149 17:52:12 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:54.149 17:52:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:54.149 17:52:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:54.149 17:52:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:54.149 17:52:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:54.149 17:52:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:54.149 [2024-10-25 17:52:12.298685] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:54.149 [2024-10-25 17:52:12.300353] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.149 [2024-10-25 17:52:12.300469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.149 [2024-10-25 17:52:12.300485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.149 [2024-10-25 17:52:12.300502] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.149 [2024-10-25 17:52:12.300511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.149 [2024-10-25 17:52:12.300524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.149 [2024-10-25 17:52:12.300532] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.149 [2024-10-25 17:52:12.300540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.149 [2024-10-25 17:52:12.300547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.149 [2024-10-25 17:52:12.300564] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.149 [2024-10-25 17:52:12.300573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.149 [2024-10-25 17:52:12.300586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 
cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.149 17:52:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:54.149 17:52:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:54.149 17:52:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:54.149 17:52:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:54.149 17:52:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:54.149 17:52:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:54.149 17:52:12 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.149 17:52:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:54.149 17:52:12 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.149 17:52:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:54.149 17:52:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:54.422 [2024-10-25 17:52:12.798678] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:10:54.423 [2024-10-25 17:52:12.800318] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.423 [2024-10-25 17:52:12.800355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.423 [2024-10-25 17:52:12.800369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.423 [2024-10-25 17:52:12.800385] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.423 [2024-10-25 17:52:12.800398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.423 [2024-10-25 17:52:12.800410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.423 [2024-10-25 17:52:12.800419] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.423 [2024-10-25 17:52:12.800425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.423 [2024-10-25 17:52:12.800433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.423 [2024-10-25 17:52:12.800441] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.423 [2024-10-25 17:52:12.800448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.423 [2024-10-25 17:52:12.800456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.423 17:52:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:54.423 17:52:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:54.424 17:52:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:54.424 17:52:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:54.424 17:52:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:54.424 17:52:12 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.424 17:52:12 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:10:54.424 17:52:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:54.424 17:52:12 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.697 17:52:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:54.697 17:52:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:54.697 17:52:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:54.697 17:52:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:54.697 17:52:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:54.697 17:52:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:54.697 17:52:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:54.697 17:52:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:54.697 17:52:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:54.697 17:52:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:54.697 17:52:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:54.697 17:52:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:54.697 17:52:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:06.896 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:06.897 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:06.897 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:06.897 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:06.897 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:06.897 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:06.897 17:52:25 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.897 17:52:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:06.897 17:52:25 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.897 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:06.897 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:06.897 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:06.897 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:06.897 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:06.897 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:06.897 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:06.897 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:06.897 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:06.897 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:06.897 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:06.897 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:06.897 17:52:25 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.897 17:52:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:06.897 17:52:25 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.897 [2024-10-25 17:52:25.198903] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:06.897 [2024-10-25 17:52:25.200605] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.897 [2024-10-25 17:52:25.200637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.897 [2024-10-25 17:52:25.200651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.897 [2024-10-25 17:52:25.200673] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.897 [2024-10-25 17:52:25.200682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.897 [2024-10-25 17:52:25.200698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.897 [2024-10-25 17:52:25.200707] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.897 [2024-10-25 17:52:25.200718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.897 [2024-10-25 17:52:25.200726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.897 [2024-10-25 17:52:25.200736] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.897 [2024-10-25 17:52:25.200744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.897 [2024-10-25 17:52:25.200754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.897 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:06.897 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:07.467 [2024-10-25 17:52:25.598903] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:07.467 [2024-10-25 17:52:25.600594] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.467 [2024-10-25 17:52:25.600629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.467 [2024-10-25 17:52:25.600644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.467 [2024-10-25 17:52:25.600663] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.467 [2024-10-25 17:52:25.600673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.467 [2024-10-25 17:52:25.600682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.467 [2024-10-25 17:52:25.600694] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.467 [2024-10-25 17:52:25.600702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.467 [2024-10-25 17:52:25.600714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.467 [2024-10-25 17:52:25.600723] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.467 [2024-10-25 17:52:25.600733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.467 [2024-10-25 17:52:25.600741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.467 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:07.467 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:07.467 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:07.467 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:07.467 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:07.467 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:07.467 17:52:25 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.467 17:52:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:07.468 17:52:25 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.468 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:07.468 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:07.468 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:07.468 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:07.468 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:07.727 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:07.727 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:07.727 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:07.727 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:07.727 17:52:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:07.727 17:52:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:07.727 17:52:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:07.727 17:52:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:19.948 17:52:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:19.948 17:52:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:19.948 17:52:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:19.948 17:52:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:19.948 17:52:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:19.948 17:52:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:19.948 17:52:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.948 17:52:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.948 17:52:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.948 17:52:38 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:19.948 17:52:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:19.948 17:52:38 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.26 00:11:19.948 17:52:38 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.26 00:11:19.948 17:52:38 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:19.948 17:52:38 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.26 00:11:19.948 17:52:38 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.26 2 00:11:19.948 remove_attach_helper took 45.26s to complete (handling 2 nvme drive(s)) 17:52:38 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:19.948 17:52:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.948 17:52:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.948 17:52:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.948 17:52:38 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:19.948 17:52:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.948 17:52:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.948 17:52:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.948 17:52:38 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:19.948 17:52:38 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:19.948 17:52:38 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:19.948 17:52:38 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:19.948 17:52:38 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:19.948 17:52:38 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:19.948 17:52:38 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:19.948 17:52:38 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:19.948 17:52:38 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:19.948 17:52:38 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:19.948 17:52:38 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:19.948 17:52:38 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:19.948 17:52:38 sw_hotplug -- 
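The heavily escaped string in the sw_hotplug.sh@71 comparison above is just xtrace's rendering of a literal [[ == ]] pattern: the test verifies both controllers have re-registered before counting the hotplug event as complete. Written with plain quoting, using the same pipeline as before:

    mapfile -t bdfs < <(scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u)
    [[ ${bdfs[*]} == '0000:00:10.0 0000:00:11.0' ]] && echo 'both controllers back'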
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:26.508 17:52:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:26.508 17:52:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:26.508 17:52:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:26.508 17:52:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:26.508 17:52:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:26.508 17:52:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:26.508 17:52:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:26.508 17:52:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:26.508 17:52:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:26.508 17:52:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:26.508 17:52:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:26.508 17:52:44 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.508 17:52:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:26.508 17:52:44 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.508 17:52:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:26.508 17:52:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:26.508 [2024-10-25 17:52:44.191297] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:26.508 [2024-10-25 17:52:44.192409] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.508 [2024-10-25 17:52:44.192448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.508 [2024-10-25 17:52:44.192460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.508 [2024-10-25 17:52:44.192480] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.508 [2024-10-25 17:52:44.192487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.508 [2024-10-25 17:52:44.192496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.508 [2024-10-25 17:52:44.192504] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.508 [2024-10-25 17:52:44.192512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.508 [2024-10-25 17:52:44.192518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.508 [2024-10-25 17:52:44.192526] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.508 [2024-10-25 17:52:44.192533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.508 [2024-10-25 17:52:44.192543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.508 17:52:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:26.508 17:52:44 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:26.508 17:52:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:26.508 17:52:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:26.508 17:52:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:26.508 17:52:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:26.508 17:52:44 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.508 17:52:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:26.508 [2024-10-25 17:52:44.691306] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:26.508 [2024-10-25 17:52:44.692384] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.508 [2024-10-25 17:52:44.692418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.508 [2024-10-25 17:52:44.692430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.508 [2024-10-25 17:52:44.692446] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.508 [2024-10-25 17:52:44.692456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.508 [2024-10-25 17:52:44.692463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.508 [2024-10-25 17:52:44.692472] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.508 [2024-10-25 17:52:44.692479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.508 [2024-10-25 17:52:44.692487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.508 [2024-10-25 17:52:44.692495] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.508 [2024-10-25 17:52:44.692503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.508 [2024-10-25 17:52:44.692509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.508 17:52:44 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.508 17:52:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:26.508 17:52:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:26.508 17:52:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:26.508 17:52:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:26.508 17:52:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:26.508 17:52:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:26.508 17:52:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:26.508 17:52:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:26.508 17:52:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:26.508 17:52:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:26.766 17:52:44 
sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:26.766 17:52:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:26.766 17:52:44 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:38.961 17:52:56 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:38.961 17:52:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:38.961 17:52:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:38.961 17:52:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:38.961 17:52:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:38.961 17:52:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:38.961 17:52:56 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.961 17:52:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:38.961 17:52:57 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.961 17:52:57 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:38.961 17:52:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:38.961 17:52:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:38.961 17:52:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:38.961 17:52:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:38.961 17:52:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:38.961 17:52:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:38.961 17:52:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:38.961 17:52:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:38.961 17:52:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:38.961 17:52:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:38.961 17:52:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:38.961 17:52:57 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.961 17:52:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:38.961 17:52:57 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.961 17:52:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:38.961 17:52:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:38.961 [2024-10-25 17:52:57.091543] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:38.961 [2024-10-25 17:52:57.094548] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.961 [2024-10-25 17:52:57.094627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.961 [2024-10-25 17:52:57.094639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.961 [2024-10-25 17:52:57.094660] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.961 [2024-10-25 17:52:57.094667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.961 [2024-10-25 17:52:57.094676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.961 [2024-10-25 17:52:57.094684] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.961 [2024-10-25 17:52:57.094692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.961 [2024-10-25 17:52:57.094699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.961 [2024-10-25 17:52:57.094707] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.961 [2024-10-25 17:52:57.094714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.961 [2024-10-25 17:52:57.094722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.260 17:52:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:39.260 17:52:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:39.260 17:52:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:39.260 17:52:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:39.260 17:52:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:39.260 17:52:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:39.260 17:52:57 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.260 17:52:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:39.260 17:52:57 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.260 17:52:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:39.260 17:52:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:39.517 [2024-10-25 17:52:57.691540] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:39.517 [2024-10-25 17:52:57.692622] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.517 [2024-10-25 17:52:57.692679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.517 [2024-10-25 17:52:57.692693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.517 [2024-10-25 17:52:57.692710] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.517 [2024-10-25 17:52:57.692721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.517 [2024-10-25 17:52:57.692729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.517 [2024-10-25 17:52:57.692737] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.517 [2024-10-25 17:52:57.692745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.517 [2024-10-25 17:52:57.692756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.517 [2024-10-25 17:52:57.692764] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.517 [2024-10-25 17:52:57.692772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.517 [2024-10-25 17:52:57.692779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.775 17:52:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:39.775 17:52:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:39.775 17:52:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:39.775 17:52:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:39.775 17:52:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:39.775 17:52:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:39.775 17:52:58 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.775 17:52:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:39.775 17:52:58 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.775 17:52:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:39.775 17:52:58 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:40.032 17:52:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:40.032 17:52:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:40.032 17:52:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:40.032 17:52:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:40.032 17:52:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:40.032 17:52:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:40.032 17:52:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:40.032 17:52:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
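[Editor's note] The bare echo commands straddling this point (sw_hotplug.sh@40 for removal, @56-@62 for re-attach) have their redirection targets hidden by xtrace, but the pattern looks like the standard PCI sysfs hotplug dance. A hedged sketch follows; every sysfs path below is an assumption about a typical surprise-removal/rescan sequence, not something the trace confirms.

# Assumed targets for the sw_hotplug.sh@40 and @56-@62 echoes. xtrace does
# not print redirections, so treat these paths as illustrative only.
remove_pci_dev() {                                                    # @40: echo 1
    echo 1 > "/sys/bus/pci/devices/$1/remove"
}

reattach_pci_devs() {
    echo 1 > /sys/bus/pci/rescan                                      # @56
    for dev in "$@"; do                                               # @58
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59
        # @60 and @61 both echo the BDF; which attributes they hit is not
        # visible, so drivers_probe here is a guess at one plausible target.
        echo "$dev" > /sys/bus/pci/drivers_probe
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"         # @62
    done
}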
00:11:40.032 17:52:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:40.032 17:52:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:40.032 17:52:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:52.224 17:53:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:52.224 17:53:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:52.224 17:53:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:52.224 17:53:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:52.224 17:53:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:52.224 17:53:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:52.224 17:53:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.224 17:53:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:52.224 17:53:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.224 17:53:10 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:52.224 17:53:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:52.224 17:53:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:52.224 17:53:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:52.224 17:53:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:52.224 17:53:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:52.224 17:53:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:52.224 17:53:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:52.224 17:53:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:52.224 17:53:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:52.224 17:53:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:52.224 17:53:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.224 17:53:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:52.224 17:53:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:52.224 17:53:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.224 17:53:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:52.224 17:53:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:52.224 [2024-10-25 17:53:10.491788] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:52.224 [2024-10-25 17:53:10.492870] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.224 [2024-10-25 17:53:10.492909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.224 [2024-10-25 17:53:10.492929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.224 [2024-10-25 17:53:10.492948] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.224 [2024-10-25 17:53:10.492956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.224 [2024-10-25 17:53:10.492965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.224 [2024-10-25 17:53:10.492973] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.224 [2024-10-25 17:53:10.492983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.224 [2024-10-25 17:53:10.492990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.224 [2024-10-25 17:53:10.492999] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.224 [2024-10-25 17:53:10.493006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.224 [2024-10-25 17:53:10.493013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.790 17:53:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:52.790 17:53:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:52.790 17:53:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:52.790 17:53:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:52.790 17:53:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:52.790 17:53:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:52.790 17:53:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.790 17:53:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:52.790 17:53:11 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.790 17:53:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:52.790 17:53:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:52.790 [2024-10-25 17:53:11.191810] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:52.790 [2024-10-25 17:53:11.192880] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.790 [2024-10-25 17:53:11.192910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.790 [2024-10-25 17:53:11.192929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.790 [2024-10-25 17:53:11.192946] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.790 [2024-10-25 17:53:11.192954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.790 [2024-10-25 17:53:11.192962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.790 [2024-10-25 17:53:11.192971] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.790 [2024-10-25 17:53:11.192978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.790 [2024-10-25 17:53:11.192986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.790 [2024-10-25 17:53:11.192993] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.790 [2024-10-25 17:53:11.193004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.790 [2024-10-25 17:53:11.193010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.355 17:53:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:53.355 17:53:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:53.355 17:53:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:53.355 17:53:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:53.355 17:53:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:53.355 17:53:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:53.355 17:53:11 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.355 17:53:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:53.355 17:53:11 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.355 17:53:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:53.355 17:53:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:53.355 17:53:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:53.355 17:53:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:53.355 17:53:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:53.355 17:53:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:53.355 17:53:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:53.355 17:53:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:53.355 17:53:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:53.355 17:53:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
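[Editor's note] The bdev_bdfs helper that the xtrace keeps expanding throughout the loop above (sw_hotplug.sh@12-13) is what the test polls to learn which NVMe controllers are still visible as SPDK bdevs. Its body can be read straight off the trace; a minimal sketch, assuming rpc_cmd is the usual autotest wrapper around scripts/rpc.py:

# Reconstructed from the sw_hotplug.sh@12-13 xtrace; the /dev/fd/63 argument
# seen in the trace is the process substitution feeding jq.
bdev_bdfs() {
    jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
}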
00:11:53.355 17:53:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:53.355 17:53:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:53.355 17:53:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:05.544 17:53:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:05.544 17:53:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:05.544 17:53:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:05.545 17:53:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:05.545 17:53:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:05.545 17:53:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:05.545 17:53:23 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.545 17:53:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:05.545 17:53:23 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.545 17:53:23 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:05.545 17:53:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:05.545 17:53:23 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.72 00:12:05.545 17:53:23 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.72 00:12:05.545 17:53:23 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:05.545 17:53:23 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.72 00:12:05.545 17:53:23 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.72 2 00:12:05.545 remove_attach_helper took 45.72s to complete (handling 2 nvme drive(s)) 17:53:23 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:05.545 17:53:23 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67170 00:12:05.545 17:53:23 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 67170 ']' 00:12:05.545 17:53:23 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 67170 00:12:05.545 17:53:23 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:12:05.545 17:53:23 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:05.545 17:53:23 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67170 00:12:05.545 killing process with pid 67170 00:12:05.545 17:53:23 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:05.545 17:53:23 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:05.545 17:53:23 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67170' 00:12:05.545 17:53:23 sw_hotplug -- common/autotest_common.sh@969 -- # kill 67170 00:12:05.545 17:53:23 sw_hotplug -- common/autotest_common.sh@974 -- # wait 67170 00:12:06.917 17:53:25 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:06.917 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:07.483 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:07.483 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:07.483 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:07.483 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:07.483 00:12:07.483 real 2m30.695s 00:12:07.483 user 1m51.440s 00:12:07.483 sys 0m17.836s 00:12:07.483 17:53:25 sw_hotplug -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:12:07.483 17:53:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:07.483 ************************************ 00:12:07.483 END TEST sw_hotplug 00:12:07.483 ************************************ 00:12:07.483 17:53:25 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:07.484 17:53:25 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:07.484 17:53:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:07.484 17:53:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:07.484 17:53:25 -- common/autotest_common.sh@10 -- # set +x 00:12:07.484 ************************************ 00:12:07.484 START TEST nvme_xnvme 00:12:07.484 ************************************ 00:12:07.484 17:53:25 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:07.741 * Looking for test storage... 00:12:07.741 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:07.741 17:53:25 nvme_xnvme -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:12:07.741 17:53:25 nvme_xnvme -- common/autotest_common.sh@1689 -- # lcov --version 00:12:07.741 17:53:25 nvme_xnvme -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:12:07.741 17:53:26 nvme_xnvme -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:12:07.741 17:53:26 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:07.741 17:53:26 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:07.741 17:53:26 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:07.741 17:53:26 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:07.741 17:53:26 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:07.741 17:53:26 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:07.741 17:53:26 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:07.741 17:53:26 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:07.741 17:53:26 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:07.741 17:53:26 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:07.741 17:53:26 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:07.741 17:53:26 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:07.741 17:53:26 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:07.741 17:53:26 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:07.741 17:53:26 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:07.741 17:53:26 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:07.741 17:53:26 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:07.741 17:53:26 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:07.741 17:53:26 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:07.741 17:53:26 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:07.741 17:53:26 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:07.741 17:53:26 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:07.741 17:53:26 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:07.741 17:53:26 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:07.741 17:53:26 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:07.741 17:53:26 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:07.741 17:53:26 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:07.741 17:53:26 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:07.741 17:53:26 nvme_xnvme -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:07.741 17:53:26 nvme_xnvme -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:12:07.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.741 --rc genhtml_branch_coverage=1 00:12:07.741 --rc genhtml_function_coverage=1 00:12:07.741 --rc genhtml_legend=1 00:12:07.741 --rc geninfo_all_blocks=1 00:12:07.741 --rc geninfo_unexecuted_blocks=1 00:12:07.741 00:12:07.741 ' 00:12:07.741 17:53:26 nvme_xnvme -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:12:07.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.741 --rc genhtml_branch_coverage=1 00:12:07.741 --rc genhtml_function_coverage=1 00:12:07.741 --rc genhtml_legend=1 00:12:07.741 --rc geninfo_all_blocks=1 00:12:07.741 --rc geninfo_unexecuted_blocks=1 00:12:07.741 00:12:07.741 ' 00:12:07.741 17:53:26 nvme_xnvme -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:12:07.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.741 --rc genhtml_branch_coverage=1 00:12:07.741 --rc genhtml_function_coverage=1 00:12:07.741 --rc genhtml_legend=1 00:12:07.741 --rc geninfo_all_blocks=1 00:12:07.741 --rc geninfo_unexecuted_blocks=1 00:12:07.741 00:12:07.741 ' 00:12:07.741 17:53:26 nvme_xnvme -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:12:07.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.741 --rc genhtml_branch_coverage=1 00:12:07.741 --rc genhtml_function_coverage=1 00:12:07.741 --rc genhtml_legend=1 00:12:07.741 --rc geninfo_all_blocks=1 00:12:07.741 --rc geninfo_unexecuted_blocks=1 00:12:07.741 00:12:07.741 ' 00:12:07.741 17:53:26 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:07.741 17:53:26 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:07.741 17:53:26 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.741 17:53:26 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.741 17:53:26 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.741 17:53:26 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.742 17:53:26 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.742 17:53:26 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.742 17:53:26 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:07.742 17:53:26 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.742 17:53:26 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:07.742 17:53:26 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:07.742 17:53:26 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:07.742 17:53:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:07.742 ************************************ 00:12:07.742 START TEST xnvme_to_malloc_dd_copy 00:12:07.742 ************************************ 00:12:07.742 17:53:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:12:07.742 17:53:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:07.742 17:53:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:07.742 17:53:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:07.742 17:53:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:07.742 17:53:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:07.742 17:53:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:07.742 17:53:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:07.742 17:53:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:07.742 17:53:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:07.742 17:53:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:07.742 17:53:26 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:07.742 17:53:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:07.742 17:53:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:07.742 17:53:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:07.742 17:53:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:07.742 17:53:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:07.742 17:53:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:07.742 17:53:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:07.742 17:53:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:07.742 17:53:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:07.742 17:53:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:07.742 17:53:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:07.742 17:53:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:07.742 17:53:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:07.742 { 00:12:07.742 "subsystems": [ 00:12:07.742 { 00:12:07.742 "subsystem": "bdev", 00:12:07.742 "config": [ 00:12:07.742 { 00:12:07.742 "params": { 00:12:07.742 "block_size": 512, 00:12:07.742 "num_blocks": 2097152, 00:12:07.742 "name": "malloc0" 00:12:07.742 }, 00:12:07.742 "method": "bdev_malloc_create" 00:12:07.742 }, 00:12:07.742 { 00:12:07.742 "params": { 00:12:07.742 "io_mechanism": "libaio", 00:12:07.742 "filename": "/dev/nullb0", 00:12:07.742 "name": "null0" 00:12:07.742 }, 00:12:07.742 "method": "bdev_xnvme_create" 00:12:07.742 }, 00:12:07.742 { 00:12:07.742 "method": "bdev_wait_for_examine" 00:12:07.742 } 00:12:07.742 ] 00:12:07.742 } 00:12:07.742 ] 00:12:07.742 } 00:12:07.742 [2024-10-25 17:53:26.118025] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
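[Editor's note] The JSON emitted by gen_conf just above, together with the spdk_dd invocation at xnvme.sh@42, is enough to replay this copy pass by hand. A sketch using the same paths the log shows; only the plumbing (an inline string via process substitution instead of the test's /dev/fd/62) is assumed.

# Replays the malloc0 -> null0 libaio pass. The config is copied verbatim
# from the logged gen_conf output.
cfg='{"subsystems": [{"subsystem": "bdev", "config": [
  {"params": {"block_size": 512, "num_blocks": 2097152, "name": "malloc0"},
   "method": "bdev_malloc_create"},
  {"params": {"io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0"},
   "method": "bdev_xnvme_create"},
  {"method": "bdev_wait_for_examine"}]}]}'

modprobe null_blk gb=1   # init_null_blk in the trace; provides /dev/nullb0
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 \
    --json <(printf '%s' "$cfg")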
00:12:07.742 [2024-10-25 17:53:26.118118] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68560 ] 00:12:08.000 [2024-10-25 17:53:26.272656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.000 [2024-10-25 17:53:26.374062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.545  [2024-10-25T17:53:29.547Z] Copying: 227/1024 [MB] (227 MBps) [2024-10-25T17:53:30.480Z] Copying: 451/1024 [MB] (224 MBps) [2024-10-25T17:53:31.413Z] Copying: 700/1024 [MB] (248 MBps) [2024-10-25T17:53:31.670Z] Copying: 975/1024 [MB] (275 MBps) [2024-10-25T17:53:34.200Z] Copying: 1024/1024 [MB] (average 245 MBps) 00:12:15.765 00:12:15.765 17:53:33 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:15.765 17:53:33 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:15.765 17:53:33 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:15.765 17:53:33 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:15.765 { 00:12:15.765 "subsystems": [ 00:12:15.765 { 00:12:15.765 "subsystem": "bdev", 00:12:15.765 "config": [ 00:12:15.765 { 00:12:15.765 "params": { 00:12:15.765 "block_size": 512, 00:12:15.765 "num_blocks": 2097152, 00:12:15.765 "name": "malloc0" 00:12:15.765 }, 00:12:15.765 "method": "bdev_malloc_create" 00:12:15.765 }, 00:12:15.765 { 00:12:15.765 "params": { 00:12:15.765 "io_mechanism": "libaio", 00:12:15.765 "filename": "/dev/nullb0", 00:12:15.765 "name": "null0" 00:12:15.765 }, 00:12:15.765 "method": "bdev_xnvme_create" 00:12:15.765 }, 00:12:15.765 { 00:12:15.765 "method": "bdev_wait_for_examine" 00:12:15.765 } 00:12:15.765 ] 00:12:15.765 } 00:12:15.765 ] 00:12:15.765 } 00:12:15.765 [2024-10-25 17:53:33.662895] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
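[Editor's note] The second spdk_dd invocation above (xnvme.sh@47) is the read-back leg: an identical generated config with --ib and --ob swapped. A sketch, with gen_conf assumed to emit the JSON shown in the log:

# Read-back leg (xnvme.sh@47): same config, direction reversed. null_blk
# does not retain data by default, so this exercises the read path and
# throughput rather than verifying a round trip.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 \
    --json <(gen_conf)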
00:12:15.765 [2024-10-25 17:53:33.663016] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68655 ] 00:12:15.765 [2024-10-25 17:53:33.823793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.765 [2024-10-25 17:53:33.925837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.665  [2024-10-25T17:53:37.077Z] Copying: 278/1024 [MB] (278 MBps) [2024-10-25T17:53:38.010Z] Copying: 572/1024 [MB] (293 MBps) [2024-10-25T17:53:38.576Z] Copying: 864/1024 [MB] (292 MBps) [2024-10-25T17:53:41.108Z] Copying: 1024/1024 [MB] (average 289 MBps) 00:12:22.673 00:12:22.673 17:53:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:22.673 17:53:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:22.673 17:53:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:22.673 17:53:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:22.673 17:53:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:22.673 17:53:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:22.673 { 00:12:22.673 "subsystems": [ 00:12:22.673 { 00:12:22.673 "subsystem": "bdev", 00:12:22.673 "config": [ 00:12:22.673 { 00:12:22.673 "params": { 00:12:22.673 "block_size": 512, 00:12:22.673 "num_blocks": 2097152, 00:12:22.673 "name": "malloc0" 00:12:22.673 }, 00:12:22.673 "method": "bdev_malloc_create" 00:12:22.673 }, 00:12:22.673 { 00:12:22.673 "params": { 00:12:22.673 "io_mechanism": "io_uring", 00:12:22.673 "filename": "/dev/nullb0", 00:12:22.673 "name": "null0" 00:12:22.673 }, 00:12:22.673 "method": "bdev_xnvme_create" 00:12:22.673 }, 00:12:22.673 { 00:12:22.673 "method": "bdev_wait_for_examine" 00:12:22.673 } 00:12:22.673 ] 00:12:22.673 } 00:12:22.673 ] 00:12:22.673 } 00:12:22.673 [2024-10-25 17:53:40.570246] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:12:22.673 [2024-10-25 17:53:40.570399] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68731 ] 00:12:22.673 [2024-10-25 17:53:40.729461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.673 [2024-10-25 17:53:40.832666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.575  [2024-10-25T17:53:43.942Z] Copying: 237/1024 [MB] (237 MBps) [2024-10-25T17:53:44.876Z] Copying: 509/1024 [MB] (272 MBps) [2024-10-25T17:53:45.809Z] Copying: 809/1024 [MB] (299 MBps) [2024-10-25T17:53:47.710Z] Copying: 1024/1024 [MB] (average 275 MBps) 00:12:29.275 00:12:29.275 17:53:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:29.275 17:53:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:29.275 17:53:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:29.275 17:53:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:29.275 { 00:12:29.275 "subsystems": [ 00:12:29.275 { 00:12:29.275 "subsystem": "bdev", 00:12:29.275 "config": [ 00:12:29.275 { 00:12:29.275 "params": { 00:12:29.275 "block_size": 512, 00:12:29.275 "num_blocks": 2097152, 00:12:29.275 "name": "malloc0" 00:12:29.275 }, 00:12:29.275 "method": "bdev_malloc_create" 00:12:29.275 }, 00:12:29.275 { 00:12:29.275 "params": { 00:12:29.275 "io_mechanism": "io_uring", 00:12:29.275 "filename": "/dev/nullb0", 00:12:29.275 "name": "null0" 00:12:29.275 }, 00:12:29.275 "method": "bdev_xnvme_create" 00:12:29.275 }, 00:12:29.275 { 00:12:29.275 "method": "bdev_wait_for_examine" 00:12:29.275 } 00:12:29.275 ] 00:12:29.275 } 00:12:29.275 ] 00:12:29.275 } 00:12:29.275 [2024-10-25 17:53:47.566466] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
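[Editor's note] At this point all four pass configurations have appeared, so the overall shape of xnvme_to_malloc_dd_copy is visible: for each I/O mechanism, one write pass and one read-back pass. The structure below is read off the xnvme.sh@38-47 trace; gen_conf's body is hidden by xtrace_disable, so it is assumed to emit the JSON configs seen above, and the associative array is assumed declared as in the earlier @28-@36 trace.

# Loop shape per the xnvme.sh@38-47 xtrace.
for io in libaio io_uring; do                            # @38
    method_bdev_xnvme_create_0["io_mechanism"]=$io       # @39
    spdk_dd --ib=malloc0 --ob=null0 --json <(gen_conf)   # @42: write pass
    spdk_dd --ib=null0 --ob=malloc0 --json <(gen_conf)   # @47: read-back pass
done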
00:12:29.275 [2024-10-25 17:53:47.566605] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68817 ] 00:12:29.533 [2024-10-25 17:53:47.724034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.534 [2024-10-25 17:53:47.810936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.433  [2024-10-25T17:53:50.799Z] Copying: 299/1024 [MB] (299 MBps) [2024-10-25T17:53:51.733Z] Copying: 602/1024 [MB] (302 MBps) [2024-10-25T17:53:51.992Z] Copying: 911/1024 [MB] (308 MBps) [2024-10-25T17:53:53.892Z] Copying: 1024/1024 [MB] (average 304 MBps) 00:12:35.457 00:12:35.457 17:53:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:35.457 17:53:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:35.715 00:12:35.715 real 0m27.863s 00:12:35.715 user 0m24.641s 00:12:35.715 sys 0m2.674s 00:12:35.715 17:53:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:35.715 17:53:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:35.715 ************************************ 00:12:35.715 END TEST xnvme_to_malloc_dd_copy 00:12:35.716 ************************************ 00:12:35.716 17:53:53 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:35.716 17:53:53 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:35.716 17:53:53 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:35.716 17:53:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:35.716 ************************************ 00:12:35.716 START TEST xnvme_bdevperf 00:12:35.716 ************************************ 00:12:35.716 17:53:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:12:35.716 17:53:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:35.716 17:53:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:35.716 17:53:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:35.716 17:53:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:35.716 17:53:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:35.716 17:53:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:35.716 17:53:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:35.716 17:53:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:35.716 17:53:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:35.716 17:53:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:35.716 17:53:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:35.716 17:53:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:35.716 17:53:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:35.716 17:53:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:35.716 17:53:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:35.716 
17:53:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:35.716 17:53:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:35.716 17:53:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:35.716 17:53:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:35.716 17:53:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:35.716 { 00:12:35.716 "subsystems": [ 00:12:35.716 { 00:12:35.716 "subsystem": "bdev", 00:12:35.716 "config": [ 00:12:35.716 { 00:12:35.716 "params": { 00:12:35.716 "io_mechanism": "libaio", 00:12:35.716 "filename": "/dev/nullb0", 00:12:35.716 "name": "null0" 00:12:35.716 }, 00:12:35.716 "method": "bdev_xnvme_create" 00:12:35.716 }, 00:12:35.716 { 00:12:35.716 "method": "bdev_wait_for_examine" 00:12:35.716 } 00:12:35.716 ] 00:12:35.716 } 00:12:35.716 ] 00:12:35.716 } 00:12:35.716 [2024-10-25 17:53:54.006356] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:12:35.716 [2024-10-25 17:53:54.006477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68918 ] 00:12:35.975 [2024-10-25 17:53:54.157771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.975 [2024-10-25 17:53:54.255932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.232 Running I/O for 5 seconds... 00:12:38.101 154816.00 IOPS, 604.75 MiB/s [2024-10-25T17:53:57.910Z] 158560.00 IOPS, 619.38 MiB/s [2024-10-25T17:53:58.862Z] 172586.67 IOPS, 674.17 MiB/s [2024-10-25T17:53:59.796Z] 179632.00 IOPS, 701.69 MiB/s 00:12:41.361 Latency(us) 00:12:41.361 [2024-10-25T17:53:59.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:41.361 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:41.361 null0 : 5.00 183082.40 715.17 0.00 0.00 347.07 112.64 2281.16 00:12:41.361 [2024-10-25T17:53:59.796Z] =================================================================================================================== 00:12:41.361 [2024-10-25T17:53:59.796Z] Total : 183082.40 715.17 0.00 0.00 347.07 112.64 2281.16 00:12:41.928 17:54:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:41.928 17:54:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:41.928 17:54:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:41.928 17:54:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:41.928 17:54:00 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:41.928 17:54:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:41.928 { 00:12:41.928 "subsystems": [ 00:12:41.928 { 00:12:41.928 "subsystem": "bdev", 00:12:41.928 "config": [ 00:12:41.928 { 00:12:41.928 "params": { 00:12:41.928 "io_mechanism": "io_uring", 00:12:41.928 "filename": "/dev/nullb0", 00:12:41.928 "name": "null0" 00:12:41.928 }, 00:12:41.928 "method": "bdev_xnvme_create" 00:12:41.928 }, 00:12:41.928 { 00:12:41.928 "method": 
"bdev_wait_for_examine" 00:12:41.928 } 00:12:41.928 ] 00:12:41.928 } 00:12:41.928 ] 00:12:41.928 } 00:12:41.928 [2024-10-25 17:54:00.130614] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:12:41.928 [2024-10-25 17:54:00.130714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68993 ] 00:12:41.928 [2024-10-25 17:54:00.271544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.928 [2024-10-25 17:54:00.357038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.189 Running I/O for 5 seconds... 00:12:44.136 176128.00 IOPS, 688.00 MiB/s [2024-10-25T17:54:03.953Z] 176000.00 IOPS, 687.50 MiB/s [2024-10-25T17:54:04.889Z] 175893.33 IOPS, 687.08 MiB/s [2024-10-25T17:54:05.824Z] 180704.00 IOPS, 705.88 MiB/s 00:12:47.389 Latency(us) 00:12:47.389 [2024-10-25T17:54:05.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:47.389 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:47.389 null0 : 5.00 189396.67 739.83 0.00 0.00 335.09 155.18 2003.89 00:12:47.389 [2024-10-25T17:54:05.824Z] =================================================================================================================== 00:12:47.389 [2024-10-25T17:54:05.824Z] Total : 189396.67 739.83 0.00 0.00 335.09 155.18 2003.89 00:12:47.955 17:54:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:47.955 17:54:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:47.955 00:12:47.955 real 0m12.252s 00:12:47.955 user 0m9.875s 00:12:47.955 sys 0m2.129s 00:12:47.955 17:54:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:47.955 17:54:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:47.955 ************************************ 00:12:47.955 END TEST xnvme_bdevperf 00:12:47.955 ************************************ 00:12:47.955 00:12:47.955 real 0m40.352s 00:12:47.955 user 0m34.645s 00:12:47.955 sys 0m4.901s 00:12:47.955 17:54:06 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:47.955 ************************************ 00:12:47.955 END TEST nvme_xnvme 00:12:47.955 ************************************ 00:12:47.955 17:54:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:47.955 17:54:06 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:47.955 17:54:06 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:47.955 17:54:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:47.955 17:54:06 -- common/autotest_common.sh@10 -- # set +x 00:12:47.955 ************************************ 00:12:47.955 START TEST blockdev_xnvme 00:12:47.955 ************************************ 00:12:47.955 17:54:06 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:47.955 * Looking for test storage... 
00:12:47.955 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:47.955 17:54:06 blockdev_xnvme -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:12:47.955 17:54:06 blockdev_xnvme -- common/autotest_common.sh@1689 -- # lcov --version 00:12:47.955 17:54:06 blockdev_xnvme -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:12:48.215 17:54:06 blockdev_xnvme -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:12:48.215 17:54:06 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:48.215 17:54:06 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:48.215 17:54:06 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:48.215 17:54:06 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:48.215 17:54:06 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:48.215 17:54:06 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:48.215 17:54:06 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:48.215 17:54:06 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:48.215 17:54:06 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:48.215 17:54:06 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:48.215 17:54:06 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:48.215 17:54:06 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:48.215 17:54:06 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:12:48.215 17:54:06 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:48.215 17:54:06 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:48.215 17:54:06 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:48.215 17:54:06 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:48.215 17:54:06 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:48.215 17:54:06 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:48.215 17:54:06 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:48.215 17:54:06 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:48.215 17:54:06 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:48.215 17:54:06 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:48.215 17:54:06 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:48.215 17:54:06 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:48.215 17:54:06 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:48.215 17:54:06 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:48.215 17:54:06 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:12:48.215 17:54:06 blockdev_xnvme -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:48.215 17:54:06 blockdev_xnvme -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:12:48.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.215 --rc genhtml_branch_coverage=1 00:12:48.215 --rc genhtml_function_coverage=1 00:12:48.215 --rc genhtml_legend=1 00:12:48.215 --rc geninfo_all_blocks=1 00:12:48.215 --rc geninfo_unexecuted_blocks=1 00:12:48.215 00:12:48.215 ' 00:12:48.215 17:54:06 blockdev_xnvme -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:12:48.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.215 --rc genhtml_branch_coverage=1 00:12:48.215 --rc genhtml_function_coverage=1 00:12:48.215 --rc genhtml_legend=1 
00:12:48.215 --rc geninfo_all_blocks=1 00:12:48.215 --rc geninfo_unexecuted_blocks=1 00:12:48.215 00:12:48.215 ' 00:12:48.215 17:54:06 blockdev_xnvme -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:12:48.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.215 --rc genhtml_branch_coverage=1 00:12:48.215 --rc genhtml_function_coverage=1 00:12:48.215 --rc genhtml_legend=1 00:12:48.215 --rc geninfo_all_blocks=1 00:12:48.215 --rc geninfo_unexecuted_blocks=1 00:12:48.215 00:12:48.215 ' 00:12:48.215 17:54:06 blockdev_xnvme -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:12:48.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.215 --rc genhtml_branch_coverage=1 00:12:48.215 --rc genhtml_function_coverage=1 00:12:48.215 --rc genhtml_legend=1 00:12:48.215 --rc geninfo_all_blocks=1 00:12:48.215 --rc geninfo_unexecuted_blocks=1 00:12:48.215 00:12:48.215 ' 00:12:48.215 17:54:06 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:48.215 17:54:06 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:48.215 17:54:06 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:48.215 17:54:06 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:48.215 17:54:06 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:48.215 17:54:06 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:48.215 17:54:06 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:48.215 17:54:06 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:48.215 17:54:06 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:48.215 17:54:06 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:48.215 17:54:06 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:48.215 17:54:06 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:48.215 17:54:06 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:48.215 17:54:06 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:48.215 17:54:06 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:48.215 17:54:06 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:12:48.216 17:54:06 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:48.216 17:54:06 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:12:48.216 17:54:06 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:48.216 17:54:06 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:48.216 17:54:06 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:48.216 17:54:06 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:12:48.216 17:54:06 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:12:48.216 17:54:06 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:48.216 17:54:06 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69135 00:12:48.216 17:54:06 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:48.216 17:54:06 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 69135 00:12:48.216 17:54:06 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 69135 ']' 00:12:48.216 17:54:06 blockdev_xnvme -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:12:48.216 17:54:06 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:48.216 17:54:06 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:48.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.216 17:54:06 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.216 17:54:06 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:48.216 17:54:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:48.216 [2024-10-25 17:54:06.505688] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:12:48.216 [2024-10-25 17:54:06.505792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69135 ] 00:12:48.476 [2024-10-25 17:54:06.670094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.476 [2024-10-25 17:54:06.775837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.043 17:54:07 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:49.043 17:54:07 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:12:49.043 17:54:07 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:49.043 17:54:07 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:12:49.043 17:54:07 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:49.043 17:54:07 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:49.043 17:54:07 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:49.301 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:49.562 Waiting for block devices as requested 00:12:49.562 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:49.562 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:49.821 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:49.822 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:55.086 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:55.086 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:12:55.086 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1653 -- # zoned_devs=() 00:12:55.086 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1653 -- # local -gA zoned_devs 00:12:55.086 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1654 -- # local nvme bdf 00:12:55.086 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:12:55.086 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1657 -- # is_block_zoned nvme0n1 00:12:55.086 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1646 -- # local device=nvme0n1 00:12:55.086 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:55.086 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:12:55.086 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:12:55.086 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1657 -- # is_block_zoned 
nvme1n1 00:12:55.086 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1646 -- # local device=nvme1n1 00:12:55.086 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:55.086 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:12:55.086 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:12:55.086 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2n1 00:12:55.086 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1646 -- # local device=nvme2n1 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2n2 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1646 -- # local device=nvme2n2 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2n3 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1646 -- # local device=nvme2n3 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1657 -- # is_block_zoned nvme3c3n1 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1646 -- # local device=nvme3c3n1 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1657 -- # is_block_zoned nvme3n1 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1646 -- # local device=nvme3n1 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:55.087 17:54:13 blockdev_xnvme 
-- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:55.087 nvme0n1 00:12:55.087 nvme1n1 00:12:55.087 nvme2n1 00:12:55.087 nvme2n2 00:12:55.087 nvme2n3 00:12:55.087 nvme3n1 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.087 17:54:13 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:55.087 17:54:13 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:55.087 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:55.088 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "752358f5-3812-46a0-86ed-e6942ac3b20c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "752358f5-3812-46a0-86ed-e6942ac3b20c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "2e689128-84aa-4268-b335-b74a8426c0d5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2e689128-84aa-4268-b335-b74a8426c0d5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "bc720e05-a127-4937-9faf-565e52a4edbb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bc720e05-a127-4937-9faf-565e52a4edbb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "014e220c-a8a4-4576-b0db-89ee116f08da"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "014e220c-a8a4-4576-b0db-89ee116f08da",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "c11d585f-95ad-4005-b6b6-12ad7b567503"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c11d585f-95ad-4005-b6b6-12ad7b567503",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "21ff9e6b-e8cb-4efc-a63e-8d4052bd0909"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "21ff9e6b-e8cb-4efc-a63e-8d4052bd0909",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:55.088 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:55.088 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:12:55.088 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:55.088 17:54:13 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 69135 00:12:55.088 17:54:13 
blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 69135 ']' 00:12:55.088 17:54:13 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 69135 00:12:55.088 17:54:13 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:12:55.088 17:54:13 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:55.088 17:54:13 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69135 00:12:55.088 17:54:13 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:55.088 17:54:13 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:55.088 killing process with pid 69135 00:12:55.088 17:54:13 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69135' 00:12:55.088 17:54:13 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 69135 00:12:55.088 17:54:13 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 69135 00:12:56.462 17:54:14 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:56.462 17:54:14 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:56.462 17:54:14 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:56.462 17:54:14 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:56.462 17:54:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:56.462 ************************************ 00:12:56.462 START TEST bdev_hello_world 00:12:56.462 ************************************ 00:12:56.462 17:54:14 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:56.462 [2024-10-25 17:54:14.716192] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:12:56.462 [2024-10-25 17:54:14.716318] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69492 ] 00:12:56.462 [2024-10-25 17:54:14.879107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.721 [2024-10-25 17:54:14.980724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.978 [2024-10-25 17:54:15.313281] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:56.978 [2024-10-25 17:54:15.313332] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:56.978 [2024-10-25 17:54:15.313352] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:56.978 [2024-10-25 17:54:15.315259] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:56.978 [2024-10-25 17:54:15.315494] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:56.978 [2024-10-25 17:54:15.315513] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:56.978 [2024-10-25 17:54:15.315661] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
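The write/read round trip above is the whole of the hello_bdev example: it opens the nvme0n1 xNVMe bdev described in bdev.json, gets an IO channel, writes "Hello World!", reads it back, and stops. A hand-written equivalent of that launch, assuming the same io_uring mechanism and device path as this run (the config path /tmp/bdev.json here is a placeholder; the harness reuses the bdev.json it generated earlier):

# Sketch: a minimal bdev.json equivalent to the harness-generated one,
# followed by the hello_bdev invocation seen in the trace above.
cat > /tmp/bdev.json <<'CONF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_xnvme_create",
          "params": {
            "filename": "/dev/nvme0n1",
            "name": "nvme0n1",
            "io_mechanism": "io_uring"
          }
        }
      ]
    }
  ]
}
CONF

/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /tmp/bdev.json -b nvme0n1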
00:12:56.978 00:12:56.978 [2024-10-25 17:54:15.315681] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:57.911 00:12:57.911 real 0m1.366s 00:12:57.911 user 0m1.084s 00:12:57.911 sys 0m0.168s 00:12:57.911 17:54:16 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:57.911 17:54:16 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:57.911 ************************************ 00:12:57.911 END TEST bdev_hello_world 00:12:57.911 ************************************ 00:12:57.911 17:54:16 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:57.911 17:54:16 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:57.911 17:54:16 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:57.911 17:54:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.911 ************************************ 00:12:57.911 START TEST bdev_bounds 00:12:57.911 ************************************ 00:12:57.911 17:54:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:12:57.911 17:54:16 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=69523 00:12:57.911 17:54:16 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:57.911 Process bdevio pid: 69523 00:12:57.911 17:54:16 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 69523' 00:12:57.911 17:54:16 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 69523 00:12:57.911 17:54:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 69523 ']' 00:12:57.911 17:54:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.911 17:54:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:57.911 17:54:16 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:57.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.911 17:54:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.911 17:54:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:57.911 17:54:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:57.911 [2024-10-25 17:54:16.123586] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:12:57.911 [2024-10-25 17:54:16.123714] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69523 ] 00:12:57.911 [2024-10-25 17:54:16.282099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:58.169 [2024-10-25 17:54:16.387416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.169 [2024-10-25 17:54:16.387490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.169 [2024-10-25 17:54:16.387520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:58.735 17:54:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:58.735 17:54:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:12:58.735 17:54:16 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:58.735 I/O targets: 00:12:58.735 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:58.735 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:58.735 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:58.735 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:58.735 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:58.735 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:58.735 00:12:58.735 00:12:58.735 CUnit - A unit testing framework for C - Version 2.1-3 00:12:58.735 http://cunit.sourceforge.net/ 00:12:58.735 00:12:58.735 00:12:58.735 Suite: bdevio tests on: nvme3n1 00:12:58.735 Test: blockdev write read block ...passed 00:12:58.735 Test: blockdev write zeroes read block ...passed 00:12:58.735 Test: blockdev write zeroes read no split ...passed 00:12:58.735 Test: blockdev write zeroes read split ...passed 00:12:58.735 Test: blockdev write zeroes read split partial ...passed 00:12:58.735 Test: blockdev reset ...passed 00:12:58.735 Test: blockdev write read 8 blocks ...passed 00:12:58.735 Test: blockdev write read size > 128k ...passed 00:12:58.735 Test: blockdev write read invalid size ...passed 00:12:58.735 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:58.735 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:58.735 Test: blockdev write read max offset ...passed 00:12:58.735 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:58.735 Test: blockdev writev readv 8 blocks ...passed 00:12:58.735 Test: blockdev writev readv 30 x 1block ...passed 00:12:58.735 Test: blockdev writev readv block ...passed 00:12:58.735 Test: blockdev writev readv size > 128k ...passed 00:12:58.735 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:58.735 Test: blockdev comparev and writev ...passed 00:12:58.735 Test: blockdev nvme passthru rw ...passed 00:12:58.735 Test: blockdev nvme passthru vendor specific ...passed 00:12:58.735 Test: blockdev nvme admin passthru ...passed 00:12:58.735 Test: blockdev copy ...passed 00:12:58.735 Suite: bdevio tests on: nvme2n3 00:12:58.735 Test: blockdev write read block ...passed 00:12:58.735 Test: blockdev write zeroes read block ...passed 00:12:58.735 Test: blockdev write zeroes read no split ...passed 00:12:58.735 Test: blockdev write zeroes read split ...passed 00:12:58.994 Test: blockdev write zeroes read split partial ...passed 00:12:58.994 Test: blockdev reset ...passed 
00:12:58.994 Test: blockdev write read 8 blocks ...passed 00:12:58.994 Test: blockdev write read size > 128k ...passed 00:12:58.994 Test: blockdev write read invalid size ...passed 00:12:58.994 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:58.994 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:58.994 Test: blockdev write read max offset ...passed 00:12:58.994 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:58.994 Test: blockdev writev readv 8 blocks ...passed 00:12:58.994 Test: blockdev writev readv 30 x 1block ...passed 00:12:58.994 Test: blockdev writev readv block ...passed 00:12:58.994 Test: blockdev writev readv size > 128k ...passed 00:12:58.994 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:58.994 Test: blockdev comparev and writev ...passed 00:12:58.994 Test: blockdev nvme passthru rw ...passed 00:12:58.994 Test: blockdev nvme passthru vendor specific ...passed 00:12:58.994 Test: blockdev nvme admin passthru ...passed 00:12:58.994 Test: blockdev copy ...passed 00:12:58.994 Suite: bdevio tests on: nvme2n2 00:12:58.994 Test: blockdev write read block ...passed 00:12:58.994 Test: blockdev write zeroes read block ...passed 00:12:58.994 Test: blockdev write zeroes read no split ...passed 00:12:58.994 Test: blockdev write zeroes read split ...passed 00:12:58.994 Test: blockdev write zeroes read split partial ...passed 00:12:58.994 Test: blockdev reset ...passed 00:12:58.994 Test: blockdev write read 8 blocks ...passed 00:12:58.994 Test: blockdev write read size > 128k ...passed 00:12:58.994 Test: blockdev write read invalid size ...passed 00:12:58.994 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:58.994 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:58.994 Test: blockdev write read max offset ...passed 00:12:58.994 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:58.994 Test: blockdev writev readv 8 blocks ...passed 00:12:58.994 Test: blockdev writev readv 30 x 1block ...passed 00:12:58.994 Test: blockdev writev readv block ...passed 00:12:58.994 Test: blockdev writev readv size > 128k ...passed 00:12:58.994 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:58.994 Test: blockdev comparev and writev ...passed 00:12:58.994 Test: blockdev nvme passthru rw ...passed 00:12:58.994 Test: blockdev nvme passthru vendor specific ...passed 00:12:58.994 Test: blockdev nvme admin passthru ...passed 00:12:58.994 Test: blockdev copy ...passed 00:12:58.994 Suite: bdevio tests on: nvme2n1 00:12:58.994 Test: blockdev write read block ...passed 00:12:58.994 Test: blockdev write zeroes read block ...passed 00:12:58.994 Test: blockdev write zeroes read no split ...passed 00:12:58.994 Test: blockdev write zeroes read split ...passed 00:12:58.994 Test: blockdev write zeroes read split partial ...passed 00:12:58.994 Test: blockdev reset ...passed 00:12:58.994 Test: blockdev write read 8 blocks ...passed 00:12:58.994 Test: blockdev write read size > 128k ...passed 00:12:58.994 Test: blockdev write read invalid size ...passed 00:12:58.994 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:58.994 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:58.994 Test: blockdev write read max offset ...passed 00:12:58.994 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:58.994 Test: blockdev writev readv 8 blocks 
...passed 00:12:58.994 Test: blockdev writev readv 30 x 1block ...passed 00:12:58.994 Test: blockdev writev readv block ...passed 00:12:58.994 Test: blockdev writev readv size > 128k ...passed 00:12:58.994 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:58.994 Test: blockdev comparev and writev ...passed 00:12:58.994 Test: blockdev nvme passthru rw ...passed 00:12:58.994 Test: blockdev nvme passthru vendor specific ...passed 00:12:58.994 Test: blockdev nvme admin passthru ...passed 00:12:58.994 Test: blockdev copy ...passed 00:12:58.994 Suite: bdevio tests on: nvme1n1 00:12:58.994 Test: blockdev write read block ...passed 00:12:58.994 Test: blockdev write zeroes read block ...passed 00:12:58.994 Test: blockdev write zeroes read no split ...passed 00:12:58.994 Test: blockdev write zeroes read split ...passed 00:12:58.994 Test: blockdev write zeroes read split partial ...passed 00:12:58.994 Test: blockdev reset ...passed 00:12:58.994 Test: blockdev write read 8 blocks ...passed 00:12:58.994 Test: blockdev write read size > 128k ...passed 00:12:58.994 Test: blockdev write read invalid size ...passed 00:12:58.994 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:58.994 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:58.994 Test: blockdev write read max offset ...passed 00:12:58.994 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:58.994 Test: blockdev writev readv 8 blocks ...passed 00:12:58.994 Test: blockdev writev readv 30 x 1block ...passed 00:12:58.994 Test: blockdev writev readv block ...passed 00:12:58.994 Test: blockdev writev readv size > 128k ...passed 00:12:58.994 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:58.994 Test: blockdev comparev and writev ...passed 00:12:58.994 Test: blockdev nvme passthru rw ...passed 00:12:58.994 Test: blockdev nvme passthru vendor specific ...passed 00:12:58.994 Test: blockdev nvme admin passthru ...passed 00:12:58.994 Test: blockdev copy ...passed 00:12:58.994 Suite: bdevio tests on: nvme0n1 00:12:58.994 Test: blockdev write read block ...passed 00:12:58.994 Test: blockdev write zeroes read block ...passed 00:12:58.994 Test: blockdev write zeroes read no split ...passed 00:12:58.994 Test: blockdev write zeroes read split ...passed 00:12:58.994 Test: blockdev write zeroes read split partial ...passed 00:12:58.994 Test: blockdev reset ...passed 00:12:58.994 Test: blockdev write read 8 blocks ...passed 00:12:58.994 Test: blockdev write read size > 128k ...passed 00:12:58.994 Test: blockdev write read invalid size ...passed 00:12:58.994 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:58.994 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:58.994 Test: blockdev write read max offset ...passed 00:12:58.994 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:58.994 Test: blockdev writev readv 8 blocks ...passed 00:12:58.994 Test: blockdev writev readv 30 x 1block ...passed 00:12:58.994 Test: blockdev writev readv block ...passed 00:12:58.994 Test: blockdev writev readv size > 128k ...passed 00:12:58.994 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:58.995 Test: blockdev comparev and writev ...passed 00:12:58.995 Test: blockdev nvme passthru rw ...passed 00:12:58.995 Test: blockdev nvme passthru vendor specific ...passed 00:12:58.995 Test: blockdev nvme admin passthru ...passed 00:12:58.995 Test: blockdev copy ...passed 
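Each "Suite: bdevio tests on: <name>" block above is one pass of the same CUnit test list over one of the six xNVMe bdevs, which is why the summary that follows reports 6 suites and 6 x 23 = 138 tests. The launch is two-step: bdevio starts in wait mode against the shared bdev.json, and a helper script triggers the run over RPC. A sketch with this run's paths (the waitforlisten step the harness performs between the two commands is elided):

# Sketch of the two-step bdevio launch traced above.
BDEV_TEST=/home/vagrant/spdk_repo/spdk/test/bdev      # as in this run

"$BDEV_TEST/bdevio/bdevio" -w -s 0 --json "$BDEV_TEST/bdev.json" &
bdevio_pid=$!

# perform_tests returns once every suite has run; the Run Summary below
# is printed by the bdevio process itself as the CUnit run completes.
"$BDEV_TEST/bdevio/tests.py" perform_tests

wait "$bdevio_pid"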
00:12:58.995 00:12:58.995 Run Summary: Type Total Ran Passed Failed Inactive 00:12:58.995 suites 6 6 n/a 0 0 00:12:58.995 tests 138 138 138 0 0 00:12:58.995 asserts 780 780 780 0 n/a 00:12:58.995 00:12:58.995 Elapsed time = 0.903 seconds 00:12:58.995 0 00:12:58.995 17:54:17 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 69523 00:12:58.995 17:54:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 69523 ']' 00:12:58.995 17:54:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 69523 00:12:58.995 17:54:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:12:58.995 17:54:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:58.995 17:54:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69523 00:12:59.254 killing process with pid 69523 00:12:59.254 17:54:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:59.254 17:54:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:59.254 17:54:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69523' 00:12:59.254 17:54:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 69523 00:12:59.254 17:54:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 69523 00:12:59.821 17:54:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:59.821 00:12:59.821 real 0m2.037s 00:12:59.821 user 0m5.142s 00:12:59.821 sys 0m0.312s 00:12:59.821 17:54:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:59.821 17:54:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:59.821 ************************************ 00:12:59.821 END TEST bdev_bounds 00:12:59.821 ************************************ 00:12:59.821 17:54:18 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:59.821 17:54:18 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:59.821 17:54:18 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:59.821 17:54:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:59.821 ************************************ 00:12:59.821 START TEST bdev_nbd 00:12:59.821 ************************************ 00:12:59.821 17:54:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:59.821 17:54:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:12:59.821 17:54:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:12:59.821 17:54:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:59.821 17:54:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:59.821 17:54:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:59.821 17:54:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:59.821 17:54:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:12:59.821 17:54:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:59.821 17:54:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:59.821 17:54:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:59.821 17:54:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:12:59.821 17:54:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:59.821 17:54:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:59.821 17:54:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:59.821 17:54:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:59.821 17:54:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=69580 00:12:59.821 17:54:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:59.821 17:54:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 69580 /var/tmp/spdk-nbd.sock 00:12:59.821 17:54:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 69580 ']' 00:12:59.821 17:54:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:59.821 17:54:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:59.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:59.821 17:54:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:59.821 17:54:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:59.821 17:54:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:59.822 17:54:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:59.822 [2024-10-25 17:54:18.208978] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:12:59.822 [2024-10-25 17:54:18.209144] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.090 [2024-10-25 17:54:18.363220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.090 [2024-10-25 17:54:18.452691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.677 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:00.677 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:13:00.677 17:54:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:00.677 17:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:00.677 17:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:00.677 17:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:00.677 17:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:00.677 17:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:00.677 17:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:00.677 17:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:00.677 17:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:00.677 17:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:00.677 17:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:00.677 17:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:00.677 17:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:00.935 17:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:00.935 17:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:00.935 17:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:00.935 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:00.935 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:00.935 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:00.935 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:00.935 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:00.935 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:00.935 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:00.935 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:00.935 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.935 
1+0 records in 00:13:00.935 1+0 records out 00:13:00.935 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306111 s, 13.4 MB/s 00:13:00.935 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.935 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:00.935 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.935 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:00.935 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:00.935 17:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:00.935 17:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:00.935 17:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:01.192 17:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:01.192 17:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:01.192 17:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:01.192 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:01.192 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:01.192 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:01.192 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:01.192 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:01.192 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:01.192 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:01.192 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:01.192 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:01.192 1+0 records in 00:13:01.192 1+0 records out 00:13:01.192 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000647684 s, 6.3 MB/s 00:13:01.192 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.192 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:01.192 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.192 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:01.192 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:01.193 17:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:01.193 17:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:01.193 17:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:01.450 17:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:01.450 17:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:01.450 17:54:19 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:01.450 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:13:01.450 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:01.450 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:01.450 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:01.450 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:13:01.450 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:01.450 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:01.450 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:01.450 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:01.450 1+0 records in 00:13:01.450 1+0 records out 00:13:01.450 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031921 s, 12.8 MB/s 00:13:01.450 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.450 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:01.450 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.450 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:01.450 17:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:01.450 17:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:01.450 17:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:01.450 17:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:13:01.708 17:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:01.708 17:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:01.708 17:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:01.708 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:13:01.708 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:01.708 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:01.708 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:01.708 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:13:01.708 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:01.708 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:01.708 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:01.708 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:01.708 1+0 records in 00:13:01.708 1+0 records out 00:13:01.708 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000631796 s, 6.5 MB/s 00:13:01.708 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.708 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:01.708 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.708 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:01.708 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:01.708 17:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:01.708 17:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:01.708 17:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:13:01.966 17:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:01.966 17:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:01.966 17:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:01.966 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:13:01.966 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:01.966 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:01.966 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:01.966 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:13:01.966 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:01.966 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:01.966 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:01.966 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:01.966 1+0 records in 00:13:01.966 1+0 records out 00:13:01.966 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399166 s, 10.3 MB/s 00:13:01.966 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.966 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:01.966 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.966 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:01.966 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:01.966 17:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:01.966 17:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:01.966 17:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:02.225 17:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:02.225 17:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:02.225 17:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:02.225 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:13:02.225 17:54:20 
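
[editor's sketch] The read-back check that follows each dd above (stat the scratch file, remove it, require a non-zero size) condenses to one function. Paths and the temp-file handling here are illustrative, not taken from the log:

    verify_one_block() {
        local dev=$1 tmp size
        tmp=$(mktemp)
        dd if="$dev" of="$tmp" bs=4096 count=1 iflag=direct || { rm -f "$tmp"; return 1; }
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]    # non-zero exit fails the test step
    }
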
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:02.225 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:02.225 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:02.225 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:13:02.225 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:02.225 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:02.225 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:02.225 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.225 1+0 records in 00:13:02.225 1+0 records out 00:13:02.225 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421841 s, 9.7 MB/s 00:13:02.225 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.225 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:02.225 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.225 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:02.225 17:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:02.225 17:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:02.225 17:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:02.225 17:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:02.483 17:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:02.483 { 00:13:02.483 "nbd_device": "/dev/nbd0", 00:13:02.483 "bdev_name": "nvme0n1" 00:13:02.483 }, 00:13:02.483 { 00:13:02.483 "nbd_device": "/dev/nbd1", 00:13:02.483 "bdev_name": "nvme1n1" 00:13:02.483 }, 00:13:02.483 { 00:13:02.483 "nbd_device": "/dev/nbd2", 00:13:02.483 "bdev_name": "nvme2n1" 00:13:02.483 }, 00:13:02.483 { 00:13:02.483 "nbd_device": "/dev/nbd3", 00:13:02.483 "bdev_name": "nvme2n2" 00:13:02.483 }, 00:13:02.483 { 00:13:02.483 "nbd_device": "/dev/nbd4", 00:13:02.483 "bdev_name": "nvme2n3" 00:13:02.483 }, 00:13:02.483 { 00:13:02.483 "nbd_device": "/dev/nbd5", 00:13:02.483 "bdev_name": "nvme3n1" 00:13:02.483 } 00:13:02.483 ]' 00:13:02.483 17:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:02.483 17:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:02.483 17:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:02.483 { 00:13:02.483 "nbd_device": "/dev/nbd0", 00:13:02.483 "bdev_name": "nvme0n1" 00:13:02.483 }, 00:13:02.483 { 00:13:02.483 "nbd_device": "/dev/nbd1", 00:13:02.483 "bdev_name": "nvme1n1" 00:13:02.483 }, 00:13:02.483 { 00:13:02.483 "nbd_device": "/dev/nbd2", 00:13:02.483 "bdev_name": "nvme2n1" 00:13:02.483 }, 00:13:02.483 { 00:13:02.483 "nbd_device": "/dev/nbd3", 00:13:02.483 "bdev_name": "nvme2n2" 00:13:02.483 }, 00:13:02.483 { 00:13:02.483 "nbd_device": "/dev/nbd4", 00:13:02.483 "bdev_name": "nvme2n3" 00:13:02.483 }, 00:13:02.483 { 00:13:02.483 "nbd_device": 
"/dev/nbd5", 00:13:02.483 "bdev_name": "nvme3n1" 00:13:02.483 } 00:13:02.483 ]' 00:13:02.483 17:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:02.483 17:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:02.483 17:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:02.483 17:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:02.483 17:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:02.483 17:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:02.483 17:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:02.740 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:02.740 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:02.740 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:02.740 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:02.740 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:02.740 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:02.740 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:02.740 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:02.740 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:02.740 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:02.997 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:02.997 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:02.997 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:02.997 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:02.997 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:02.997 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:02.997 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:02.997 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:02.997 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:02.997 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:03.255 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:03.255 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:03.255 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:03.255 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.255 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.255 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:13:03.255 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.255 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.255 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.255 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:03.514 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:03.514 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:03.514 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:03.514 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.514 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.514 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:03.514 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.514 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.514 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.514 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:03.772 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:03.772 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:03.772 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:03.772 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.772 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.772 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:03.773 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.773 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.773 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.773 17:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:03.773 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:03.773 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:03.773 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:03.773 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.773 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.773 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:03.773 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.773 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.773 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:03.773 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:03.773 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:04.031 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:04.031 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:04.032 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:04.032 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:04.032 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:04.032 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:04.032 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:04.032 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:04.032 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:04.032 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:04.032 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:04.032 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:04.032 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:04.032 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:04.032 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:04.032 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:04.032 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:04.032 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:04.032 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:04.032 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:04.032 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:04.032 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:04.032 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:04.032 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:04.032 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:04.032 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:04.032 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:04.032 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:04.290 /dev/nbd0 00:13:04.290 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:04.290 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:04.290 17:54:22 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:04.290 17:54:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:04.290 17:54:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:04.290 17:54:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:04.290 17:54:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:04.290 17:54:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:04.290 17:54:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:04.290 17:54:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:04.290 17:54:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.290 1+0 records in 00:13:04.290 1+0 records out 00:13:04.290 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570988 s, 7.2 MB/s 00:13:04.290 17:54:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.290 17:54:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:04.290 17:54:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.290 17:54:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:04.290 17:54:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:04.290 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:04.290 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:04.290 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:04.547 /dev/nbd1 00:13:04.547 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:04.547 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:04.548 17:54:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:04.548 17:54:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:04.548 17:54:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:04.548 17:54:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:04.548 17:54:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:04.548 17:54:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:04.548 17:54:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:04.548 17:54:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:04.548 17:54:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.548 1+0 records in 00:13:04.548 1+0 records out 00:13:04.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346494 s, 11.8 MB/s 00:13:04.548 17:54:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.548 17:54:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:04.548 17:54:22 
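
[editor's sketch] This second round of nbd_start_disk calls passes an explicit device path (nvme1n1 /dev/nbd1 and so on), unlike the first round, where SPDK picked the next free /dev/nbdX itself. In both cases the RPC prints the chosen path, which is why the trace can capture it. A hedged usage sketch, with the socket and script paths taken from the trace:

    sock=/var/tmp/spdk-nbd.sock
    # with an explicit /dev/nbdX the RPC binds there; with the argument
    # omitted, SPDK picks a free device. Either way the path comes back.
    dev=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" nbd_start_disk nvme1n1 /dev/nbd1)
    echo "nvme1n1 exported on $dev"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" nbd_stop_disk "$dev"
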
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.548 17:54:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:04.548 17:54:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:04.548 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:04.548 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:04.548 17:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:04.806 /dev/nbd10 00:13:04.806 17:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:04.806 17:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:04.806 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:13:04.806 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:04.806 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:04.806 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:04.806 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:13:04.806 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:04.806 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:04.806 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:04.806 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.806 1+0 records in 00:13:04.806 1+0 records out 00:13:04.806 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489004 s, 8.4 MB/s 00:13:04.806 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.806 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:04.806 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.806 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:04.806 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:04.806 17:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:04.806 17:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:04.806 17:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:05.064 /dev/nbd11 00:13:05.064 17:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:05.064 17:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:05.064 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:13:05.064 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:05.064 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:05.064 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:05.064 17:54:23 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:13:05.064 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:05.064 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:05.064 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:05.064 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.064 1+0 records in 00:13:05.064 1+0 records out 00:13:05.064 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000699862 s, 5.9 MB/s 00:13:05.064 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.064 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:05.064 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.064 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:05.064 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:05.064 17:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.064 17:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:05.064 17:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:05.322 /dev/nbd12 00:13:05.322 17:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:05.322 17:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:05.322 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:13:05.322 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:05.322 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:05.322 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:05.322 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:13:05.322 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:05.322 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:05.322 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:05.322 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.322 1+0 records in 00:13:05.322 1+0 records out 00:13:05.322 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447607 s, 9.2 MB/s 00:13:05.322 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.322 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:05.322 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.322 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:05.322 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:05.322 17:54:23 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.322 17:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:05.322 17:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:05.581 /dev/nbd13 00:13:05.581 17:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:05.581 17:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:05.581 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:13:05.581 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:05.581 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:05.581 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:05.581 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:13:05.581 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:05.581 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:05.581 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:05.581 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.581 1+0 records in 00:13:05.581 1+0 records out 00:13:05.581 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000506515 s, 8.1 MB/s 00:13:05.581 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.581 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:05.581 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.581 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:05.581 17:54:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:05.581 17:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.581 17:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:05.581 17:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:05.581 17:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:05.581 17:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:05.840 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:05.840 { 00:13:05.840 "nbd_device": "/dev/nbd0", 00:13:05.840 "bdev_name": "nvme0n1" 00:13:05.840 }, 00:13:05.840 { 00:13:05.840 "nbd_device": "/dev/nbd1", 00:13:05.840 "bdev_name": "nvme1n1" 00:13:05.840 }, 00:13:05.840 { 00:13:05.840 "nbd_device": "/dev/nbd10", 00:13:05.840 "bdev_name": "nvme2n1" 00:13:05.840 }, 00:13:05.840 { 00:13:05.840 "nbd_device": "/dev/nbd11", 00:13:05.840 "bdev_name": "nvme2n2" 00:13:05.840 }, 00:13:05.840 { 00:13:05.840 "nbd_device": "/dev/nbd12", 00:13:05.840 "bdev_name": "nvme2n3" 00:13:05.840 }, 00:13:05.840 { 00:13:05.840 "nbd_device": "/dev/nbd13", 00:13:05.840 "bdev_name": "nvme3n1" 00:13:05.840 } 00:13:05.840 ]' 00:13:05.840 17:54:24 
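
[editor's sketch] The nbd_get_disks JSON just captured is parsed in the trace with jq -r '.[] | .nbd_device' into a bash array, and the device count is derived with grep -c /dev/nbd. The same idea as a self-contained snippet; mapfile is a substitution of mine that avoids the word-splitting of the array=( $(...) ) form the test uses:

    sock=/var/tmp/spdk-nbd.sock
    mapfile -t nbd_devs < <(
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" nbd_get_disks |
        jq -r '.[] | .nbd_device'
    )
    echo "${#nbd_devs[@]} NBD devices active: ${nbd_devs[*]}"
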
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:05.840 { 00:13:05.840 "nbd_device": "/dev/nbd0", 00:13:05.840 "bdev_name": "nvme0n1" 00:13:05.840 }, 00:13:05.840 { 00:13:05.840 "nbd_device": "/dev/nbd1", 00:13:05.840 "bdev_name": "nvme1n1" 00:13:05.840 }, 00:13:05.840 { 00:13:05.840 "nbd_device": "/dev/nbd10", 00:13:05.840 "bdev_name": "nvme2n1" 00:13:05.840 }, 00:13:05.840 { 00:13:05.840 "nbd_device": "/dev/nbd11", 00:13:05.840 "bdev_name": "nvme2n2" 00:13:05.840 }, 00:13:05.840 { 00:13:05.840 "nbd_device": "/dev/nbd12", 00:13:05.840 "bdev_name": "nvme2n3" 00:13:05.840 }, 00:13:05.840 { 00:13:05.840 "nbd_device": "/dev/nbd13", 00:13:05.840 "bdev_name": "nvme3n1" 00:13:05.840 } 00:13:05.840 ]' 00:13:05.840 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:05.840 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:05.840 /dev/nbd1 00:13:05.840 /dev/nbd10 00:13:05.840 /dev/nbd11 00:13:05.840 /dev/nbd12 00:13:05.840 /dev/nbd13' 00:13:05.840 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:05.840 /dev/nbd1 00:13:05.840 /dev/nbd10 00:13:05.840 /dev/nbd11 00:13:05.840 /dev/nbd12 00:13:05.840 /dev/nbd13' 00:13:05.840 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:05.840 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:05.840 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:05.840 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:05.840 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:05.840 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:05.840 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:05.840 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:05.840 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:05.840 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:05.840 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:05.840 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:05.840 256+0 records in 00:13:05.840 256+0 records out 00:13:05.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00691109 s, 152 MB/s 00:13:05.840 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:05.840 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:05.840 256+0 records in 00:13:05.840 256+0 records out 00:13:05.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0596449 s, 17.6 MB/s 00:13:05.840 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:05.840 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:05.840 256+0 records in 00:13:05.840 256+0 records out 00:13:05.840 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.0623328 s, 16.8 MB/s 00:13:05.840 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:05.840 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:05.840 256+0 records in 00:13:05.840 256+0 records out 00:13:05.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0580174 s, 18.1 MB/s 00:13:05.840 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:05.840 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:06.099 256+0 records in 00:13:06.099 256+0 records out 00:13:06.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0610857 s, 17.2 MB/s 00:13:06.099 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:06.099 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:06.099 256+0 records in 00:13:06.099 256+0 records out 00:13:06.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0565784 s, 18.5 MB/s 00:13:06.099 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:06.099 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:06.099 256+0 records in 00:13:06.099 256+0 records out 00:13:06.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0574338 s, 18.3 MB/s 00:13:06.099 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:06.099 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:06.099 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:06.099 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:06.099 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:06.099 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:06.099 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:06.099 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:06.099 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:06.099 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:06.099 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:06.099 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:06.099 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:06.099 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:06.099 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:06.099 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:06.099 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:06.099 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:06.099 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:06.099 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:06.099 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:06.099 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:06.099 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:06.099 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:06.099 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:06.099 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.099 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:06.357 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:06.357 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:06.357 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:06.357 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.357 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.357 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:06.357 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:06.357 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.357 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.357 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:06.614 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:06.614 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:06.614 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:06.614 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.614 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.614 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:06.614 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:06.615 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.615 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.615 17:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:06.934 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:06.934 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:06.934 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:06.934 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.934 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.934 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:06.934 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:06.934 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.934 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.935 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:06.935 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:06.935 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:06.935 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:06.935 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.935 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.935 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:06.935 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:06.935 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.935 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.935 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:07.209 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:07.209 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:07.209 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:07.209 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.209 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.209 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:07.209 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:07.209 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.209 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.209 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:07.209 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:07.209 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:07.209 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:07.209 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.209 17:54:25 
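
[editor's sketch] The nbd_dd_data_verify cycle that ran just before this tear-down loop is worth distilling: one 1 MiB buffer of random data is written to every exported device with direct I/O, then cmp checks each device byte-for-byte against the source file. Block size, count, and the device list come from the trace; the temp-file handling is illustrative:

    tmp=$(mktemp)
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    dd if=/dev/urandom of="$tmp" bs=4096 count=256          # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"                          # exits non-zero on any mismatch
    done
    rm -f "$tmp"

Writing the same data everywhere and comparing per device is what lets one pass catch a misrouted write on any of the six bdev-to-nbd mappings.
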
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.209 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:07.209 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:07.209 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.209 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:07.209 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:07.209 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:07.468 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:07.468 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:07.468 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:07.468 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:07.468 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:07.468 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:07.468 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:07.468 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:07.468 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:07.468 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:07.468 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:07.468 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:07.468 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:07.468 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:07.468 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:07.468 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:07.726 malloc_lvol_verify 00:13:07.727 17:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:07.727 70976734-ce7a-40e7-990a-26e6a4513af3 00:13:07.985 17:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:07.985 7e068fac-0314-4632-9dbc-5d2c61c54765 00:13:07.985 17:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:08.243 /dev/nbd0 00:13:08.243 17:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:08.243 17:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:08.243 17:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:08.243 17:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:08.243 17:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
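
[editor's sketch] The nbd_with_lvol_verify step whose mkfs.ext4 output follows stacks the layers end to end: a 16 MiB malloc bdev with 512-byte blocks hosts an lvstore, a 4 MiB lvol carved from it is exported over NBD, and a real ext4 mkfs proves the whole path. Sizes and names match the trace; the rpc wrapper and capacity-wait loop are condensed approximations of the helpers in nbd_common.sh:

    sock=/var/tmp/spdk-nbd.sock
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" "$@"; }
    rpc bdev_malloc_create -b malloc_lvol_verify 16 512
    rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
    rpc bdev_lvol_create lvol 4 -l lvs
    rpc nbd_start_disk lvs/lvol /dev/nbd0
    # wait until the kernel reports a non-zero capacity before touching it,
    # mirroring the /sys/block/nbd0/size check in the trace
    while [ "$(cat /sys/block/nbd0/size)" -eq 0 ]; do sleep 0.1; done
    mkfs.ext4 /dev/nbd0
    rpc nbd_stop_disk /dev/nbd0
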
00:13:08.243 mke2fs 1.47.0 (5-Feb-2023) 00:13:08.243 Discarding device blocks: 0/4096 done 00:13:08.243 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:08.243 00:13:08.243 Allocating group tables: 0/1 done 00:13:08.243 Writing inode tables: 0/1 done 00:13:08.243 Creating journal (1024 blocks): done 00:13:08.243 Writing superblocks and filesystem accounting information: 0/1 done 00:13:08.243 00:13:08.243 17:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:08.243 17:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:08.243 17:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:08.243 17:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:08.243 17:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:08.243 17:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:08.243 17:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:08.502 17:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:08.502 17:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:08.502 17:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:08.502 17:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:08.502 17:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.502 17:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:08.502 17:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:08.502 17:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.503 17:54:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 69580 00:13:08.503 17:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 69580 ']' 00:13:08.503 17:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 69580 00:13:08.503 17:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:13:08.503 17:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:08.503 17:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69580 00:13:08.503 17:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:08.503 17:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:08.503 killing process with pid 69580 00:13:08.503 17:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69580' 00:13:08.503 17:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 69580 00:13:08.503 17:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 69580 00:13:09.069 17:54:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:09.069 00:13:09.069 real 0m9.304s 00:13:09.069 user 0m13.344s 00:13:09.069 sys 0m3.149s 00:13:09.069 17:54:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:09.069 17:54:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:09.069 ************************************ 
00:13:09.069 END TEST bdev_nbd 00:13:09.069 ************************************ 00:13:09.069 17:54:27 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:09.069 17:54:27 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:09.069 17:54:27 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:09.069 17:54:27 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:09.069 17:54:27 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:09.069 17:54:27 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:09.069 17:54:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:09.069 ************************************ 00:13:09.069 START TEST bdev_fio 00:13:09.069 ************************************ 00:13:09.069 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:09.069 17:54:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:13:09.069 17:54:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:09.069 17:54:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:09.069 17:54:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:09.069 17:54:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:09.069 17:54:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:09.069 17:54:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:09.069 17:54:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:09.069 17:54:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:09.069 17:54:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:13:09.069 17:54:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:13:09.069 17:54:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:09.069 17:54:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:09.069 17:54:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:09.069 17:54:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:13:09.069 17:54:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:09.069 17:54:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:09.069 17:54:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:09.069 17:54:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:13:09.069 17:54:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:13:09.069 17:54:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:13:09.069 17:54:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # 
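
[editor's sketch] The fio config generation traced below first confirms the fio binary is a 3.x release, sets serialize_overlap=1 (the trace does this for the AIO bdev type), then appends one [job_<bdev>] section per bdev to bdev.fio. Reduced to its core, with the config path and bdev names as they appear in the trace:

    fio_cfg=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
    echo serialize_overlap=1 >> "$fio_cfg"    # added when bdev_type is AIO and fio is 3.x
    for b in nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1; do
        printf '[job_%s]\nfilename=%s\n' "$b" "$b" >> "$fio_cfg"
    done

The filename= lines name SPDK bdevs rather than files, which works because the run that follows loads the spdk_bdev ioengine and the bdev.json configuration.
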
echo serialize_overlap=1 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:09.328 ************************************ 00:13:09.328 START TEST bdev_fio_rw_verify 00:13:09.328 ************************************ 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:09.328 17:54:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:09.328 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:09.328 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:09.328 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:09.328 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:09.328 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:09.328 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:09.328 fio-3.35 00:13:09.328 Starting 6 threads 00:13:21.523 00:13:21.523 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=69971: Fri Oct 25 17:54:38 2024 00:13:21.523 read: IOPS=43.5k, BW=170MiB/s (178MB/s)(1699MiB/10001msec) 00:13:21.523 slat (usec): min=2, max=1532, avg= 4.61, stdev= 3.92 00:13:21.523 clat (usec): min=57, max=315027, avg=376.74, 
stdev=1364.71 00:13:21.523 lat (usec): min=64, max=315031, avg=381.35, stdev=1364.77 00:13:21.523 clat percentiles (usec): 00:13:21.523 | 50.000th=[ 338], 99.000th=[ 988], 99.900th=[ 1647], 00:13:21.523 | 99.990th=[ 3818], 99.999th=[316670] 00:13:21.523 write: IOPS=43.9k, BW=171MiB/s (180MB/s)(1715MiB/10001msec); 0 zone resets 00:13:21.523 slat (usec): min=4, max=3519, avg=23.17, stdev=33.83 00:13:21.523 clat (usec): min=56, max=6552, avg=504.79, stdev=228.59 00:13:21.523 lat (usec): min=72, max=6566, avg=527.96, stdev=232.57 00:13:21.523 clat percentiles (usec): 00:13:21.523 | 50.000th=[ 474], 99.000th=[ 1172], 99.900th=[ 1696], 99.990th=[ 4555], 00:13:21.523 | 99.999th=[ 6259] 00:13:21.523 bw ( KiB/s): min=143538, max=197227, per=99.96%, avg=175509.53, stdev=2429.82, samples=114 00:13:21.523 iops : min=35884, max=49306, avg=43876.95, stdev=607.47, samples=114 00:13:21.523 lat (usec) : 100=0.17%, 250=19.22%, 500=48.12%, 750=24.13%, 1000=6.50% 00:13:21.523 lat (msec) : 2=1.80%, 4=0.05%, 10=0.01%, 500=0.01% 00:13:21.523 cpu : usr=53.97%, sys=27.53%, ctx=10262, majf=0, minf=34693 00:13:21.523 IO depths : 1=11.4%, 2=23.6%, 4=51.4%, 8=13.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:21.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.523 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.523 issued rwts: total=434829,438986,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:21.523 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:21.523 00:13:21.523 Run status group 0 (all jobs): 00:13:21.523 READ: bw=170MiB/s (178MB/s), 170MiB/s-170MiB/s (178MB/s-178MB/s), io=1699MiB (1781MB), run=10001-10001msec 00:13:21.523 WRITE: bw=171MiB/s (180MB/s), 171MiB/s-171MiB/s (180MB/s-180MB/s), io=1715MiB (1798MB), run=10001-10001msec 00:13:21.523 ----------------------------------------------------- 00:13:21.523 Suppressions used: 00:13:21.523 count bytes template 00:13:21.523 6 48 /usr/src/fio/parse.c 00:13:21.523 3850 369600 /usr/src/fio/iolog.c 00:13:21.523 1 8 libtcmalloc_minimal.so 00:13:21.523 1 904 libcrypto.so 00:13:21.523 ----------------------------------------------------- 00:13:21.523 00:13:21.523 00:13:21.523 real 0m11.663s 00:13:21.523 user 0m33.632s 00:13:21.523 sys 0m16.804s 00:13:21.523 ************************************ 00:13:21.523 END TEST bdev_fio_rw_verify 00:13:21.523 ************************************ 00:13:21.523 17:54:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:21.523 17:54:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:21.523 17:54:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:21.523 17:54:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:21.523 17:54:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:21.523 17:54:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:21.523 17:54:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:13:21.523 17:54:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:13:21.523 17:54:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:21.524 17:54:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local 
fio_dir=/usr/src/fio 00:13:21.524 17:54:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:21.524 17:54:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:13:21.524 17:54:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:21.524 17:54:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:21.524 17:54:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:21.524 17:54:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:13:21.524 17:54:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:13:21.524 17:54:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:13:21.524 17:54:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:21.524 17:54:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "752358f5-3812-46a0-86ed-e6942ac3b20c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "752358f5-3812-46a0-86ed-e6942ac3b20c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "2e689128-84aa-4268-b335-b74a8426c0d5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2e689128-84aa-4268-b335-b74a8426c0d5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "bc720e05-a127-4937-9faf-565e52a4edbb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bc720e05-a127-4937-9faf-565e52a4edbb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "014e220c-a8a4-4576-b0db-89ee116f08da"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "014e220c-a8a4-4576-b0db-89ee116f08da",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "c11d585f-95ad-4005-b6b6-12ad7b567503"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c11d585f-95ad-4005-b6b6-12ad7b567503",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "21ff9e6b-e8cb-4efc-a63e-8d4052bd0909"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "21ff9e6b-e8cb-4efc-a63e-8d4052bd0909",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:21.524 17:54:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:21.524 17:54:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:21.524 /home/vagrant/spdk_repo/spdk 00:13:21.524 17:54:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:21.524 17:54:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:21.524 17:54:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:13:21.524 00:13:21.524 real 0m11.813s 00:13:21.524 user 0m33.708s 00:13:21.524 sys 0m16.870s 00:13:21.524 ************************************ 00:13:21.524 17:54:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:21.524 17:54:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:21.524 END TEST bdev_fio 00:13:21.524 ************************************ 00:13:21.524 17:54:39 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:21.524 17:54:39 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:21.524 17:54:39 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:21.524 17:54:39 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:21.524 17:54:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:21.524 ************************************ 00:13:21.524 START TEST bdev_verify 00:13:21.524 ************************************ 00:13:21.524 17:54:39 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:21.524 [2024-10-25 17:54:39.418270] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:13:21.524 [2024-10-25 17:54:39.418388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70140 ] 00:13:21.524 [2024-10-25 17:54:39.578344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:21.524 [2024-10-25 17:54:39.682690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.524 [2024-10-25 17:54:39.682851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.784 Running I/O for 5 seconds... 
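For reference, the verify pass that just started boils down to this bdevperf invocation (paths shortened; the flag notes reflect standard bdevperf usage):

  # -q 128: queue depth; -o 4096: I/O size in bytes; -w verify: verification
  # workload; -t 5: run time in seconds; -m 0x3: reactor core mask, i.e. cores 0
  # and 1, matching the two "Reactor started" notices above; -C as passed upstream
  ./build/examples/bdevperf --json test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3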
00:13:24.097 21600.00 IOPS, 84.38 MiB/s [2024-10-25T17:54:43.467Z] 22944.00 IOPS, 89.62 MiB/s [2024-10-25T17:54:44.402Z] 23594.67 IOPS, 92.17 MiB/s [2024-10-25T17:54:45.337Z] 23704.00 IOPS, 92.59 MiB/s [2024-10-25T17:54:45.337Z] 23744.00 IOPS, 92.75 MiB/s 00:13:26.902 Latency(us) 00:13:26.902 [2024-10-25T17:54:45.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.902 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:26.902 Verification LBA range: start 0x0 length 0xa0000 00:13:26.902 nvme0n1 : 5.05 1750.20 6.84 0.00 0.00 72986.20 10334.52 69367.34 00:13:26.902 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:26.902 Verification LBA range: start 0xa0000 length 0xa0000 00:13:26.902 nvme0n1 : 5.03 1703.52 6.65 0.00 0.00 74996.94 12653.49 74610.22 00:13:26.902 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:26.902 Verification LBA range: start 0x0 length 0xbd0bd 00:13:26.902 nvme1n1 : 5.06 3079.22 12.03 0.00 0.00 41256.84 4461.49 72997.02 00:13:26.902 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:26.902 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:26.902 nvme1n1 : 5.06 3038.93 11.87 0.00 0.00 41874.46 4360.66 77433.30 00:13:26.902 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:26.902 Verification LBA range: start 0x0 length 0x80000 00:13:26.902 nvme2n1 : 5.05 1774.96 6.93 0.00 0.00 71638.89 8318.03 70980.53 00:13:26.902 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:26.902 Verification LBA range: start 0x80000 length 0x80000 00:13:26.902 nvme2n1 : 5.07 1717.99 6.71 0.00 0.00 74020.47 6906.49 81062.99 00:13:26.902 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:26.902 Verification LBA range: start 0x0 length 0x80000 00:13:26.902 nvme2n2 : 5.04 1750.78 6.84 0.00 0.00 72430.07 11292.36 70980.53 00:13:26.902 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:26.903 Verification LBA range: start 0x80000 length 0x80000 00:13:26.903 nvme2n2 : 5.07 1715.86 6.70 0.00 0.00 73902.68 10233.70 64124.46 00:13:26.903 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:26.903 Verification LBA range: start 0x0 length 0x80000 00:13:26.903 nvme2n3 : 5.07 1767.81 6.91 0.00 0.00 71593.07 4032.98 71787.13 00:13:26.903 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:26.903 Verification LBA range: start 0x80000 length 0x80000 00:13:26.903 nvme2n3 : 5.07 1715.39 6.70 0.00 0.00 73761.88 7763.50 73400.32 00:13:26.903 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:26.903 Verification LBA range: start 0x0 length 0x20000 00:13:26.903 nvme3n1 : 5.07 1766.79 6.90 0.00 0.00 71487.32 5873.03 75416.81 00:13:26.903 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:26.903 Verification LBA range: start 0x20000 length 0x20000 00:13:26.903 nvme3n1 : 5.07 1717.39 6.71 0.00 0.00 73515.97 7208.96 75820.11 00:13:26.903 [2024-10-25T17:54:45.338Z] =================================================================================================================== 00:13:26.903 [2024-10-25T17:54:45.338Z] Total : 23498.84 91.79 0.00 0.00 64823.42 4032.98 81062.99 00:13:27.838 00:13:27.838 real 0m6.557s 00:13:27.838 user 0m10.486s 00:13:27.838 sys 0m1.630s 00:13:27.838 17:54:45 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:13:27.838 17:54:45 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:27.838 ************************************ 00:13:27.838 END TEST bdev_verify 00:13:27.838 ************************************ 00:13:27.838 17:54:45 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:27.838 17:54:45 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:27.838 17:54:45 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:27.838 17:54:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:27.838 ************************************ 00:13:27.838 START TEST bdev_verify_big_io 00:13:27.838 ************************************ 00:13:27.838 17:54:45 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:27.838 [2024-10-25 17:54:46.019477] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:13:27.838 [2024-10-25 17:54:46.019605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70239 ] 00:13:27.838 [2024-10-25 17:54:46.180726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:28.097 [2024-10-25 17:54:46.285224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.097 [2024-10-25 17:54:46.285337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.356 Running I/O for 5 seconds... 
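The big-I/O pass reuses the same harness with only the transfer size raised to -o 65536 (64 KiB). A quick sanity check against the first sample printed below, 1280 IOPS at 64 KiB per I/O:

  echo $(( 1280 * 65536 / 1048576 ))   # 83886080 B/s over 2^20 -> 80, i.e. 80.00 MiB/s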
00:13:33.948 1280.00 IOPS, 80.00 MiB/s [2024-10-25T17:54:52.645Z] 2549.50 IOPS, 159.34 MiB/s [2024-10-25T17:54:53.589Z] 2373.67 IOPS, 148.35 MiB/s 00:13:35.154 Latency(us) 00:13:35.154 [2024-10-25T17:54:53.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.154 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:35.154 Verification LBA range: start 0x0 length 0xa000 00:13:35.154 nvme0n1 : 6.20 113.48 7.09 0.00 0.00 1070836.47 187130.49 1393799.48 00:13:35.154 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:35.154 Verification LBA range: start 0xa000 length 0xa000 00:13:35.154 nvme0n1 : 6.47 118.72 7.42 0.00 0.00 995778.82 66544.25 1129235.69 00:13:35.154 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:35.154 Verification LBA range: start 0x0 length 0xbd0b 00:13:35.154 nvme1n1 : 6.47 138.50 8.66 0.00 0.00 857904.51 6351.95 1232480.10 00:13:35.154 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:35.154 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:35.154 nvme1n1 : 6.16 103.90 6.49 0.00 0.00 1129462.35 9124.63 2090699.22 00:13:35.154 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:35.154 Verification LBA range: start 0x0 length 0x8000 00:13:35.154 nvme2n1 : 6.21 90.70 5.67 0.00 0.00 1251991.00 120989.54 1806777.11 00:13:35.154 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:35.154 Verification LBA range: start 0x8000 length 0x8000 00:13:35.154 nvme2n1 : 5.82 131.99 8.25 0.00 0.00 886302.98 170998.55 935652.43 00:13:35.154 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:35.154 Verification LBA range: start 0x0 length 0x8000 00:13:35.154 nvme2n2 : 6.21 100.95 6.31 0.00 0.00 1076023.45 209715.20 1910021.51 00:13:35.154 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:35.154 Verification LBA range: start 0x8000 length 0x8000 00:13:35.154 nvme2n2 : 5.82 107.20 6.70 0.00 0.00 1039912.19 178257.92 1535760.54 00:13:35.154 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:35.154 Verification LBA range: start 0x0 length 0x8000 00:13:35.154 nvme2n3 : 6.47 93.93 5.87 0.00 0.00 1130255.21 98404.82 2774693.42 00:13:35.154 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:35.154 Verification LBA range: start 0x8000 length 0x8000 00:13:35.154 nvme2n3 : 6.37 120.51 7.53 0.00 0.00 873838.01 133895.09 1109877.37 00:13:35.154 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:35.154 Verification LBA range: start 0x0 length 0x2000 00:13:35.154 nvme3n1 : 6.60 120.62 7.54 0.00 0.00 838030.26 203.22 1335724.50 00:13:35.154 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:35.154 Verification LBA range: start 0x2000 length 0x2000 00:13:35.154 nvme3n1 : 6.63 128.85 8.05 0.00 0.00 810282.63 191.41 1116330.14 00:13:35.154 [2024-10-25T17:54:53.589Z] =================================================================================================================== 00:13:35.154 [2024-10-25T17:54:53.589Z] Total : 1369.35 85.58 0.00 0.00 979183.51 191.41 2774693.42 00:13:36.098 00:13:36.098 real 0m8.434s 00:13:36.098 user 0m15.083s 00:13:36.098 sys 0m0.900s 00:13:36.098 17:54:54 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:36.098 
************************************ 00:13:36.098 END TEST bdev_verify_big_io 00:13:36.098 ************************************ 00:13:36.098 17:54:54 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.098 17:54:54 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:36.098 17:54:54 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:36.098 17:54:54 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:36.098 17:54:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:36.098 ************************************ 00:13:36.098 START TEST bdev_write_zeroes 00:13:36.098 ************************************ 00:13:36.098 17:54:54 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:36.360 [2024-10-25 17:54:54.537638] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:13:36.360 [2024-10-25 17:54:54.537793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70354 ] 00:13:36.360 [2024-10-25 17:54:54.703849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.620 [2024-10-25 17:54:54.838106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.881 Running I/O for 1 seconds... 
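Every START/END banner and real/user/sys triplet in this log comes from the run_test wrapper in autotest_common.sh. Judging only from the traces, it behaves roughly like the sketch below; this is an inference, not the actual implementation:

  # hedged sketch of run_test as suggested by the banners and timing lines
  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"          # emits the real/user/sys lines seen after each test
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }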
00:13:38.262 81632.00 IOPS, 318.88 MiB/s 00:13:38.262 Latency(us) 00:13:38.262 [2024-10-25T17:54:56.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:38.262 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:38.262 nvme0n1 : 1.03 13097.90 51.16 0.00 0.00 9761.29 6452.78 26214.40 00:13:38.262 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:38.262 nvme1n1 : 1.03 14795.39 57.79 0.00 0.00 8634.11 5318.50 21475.64 00:13:38.263 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:38.263 nvme2n1 : 1.03 13067.42 51.04 0.00 0.00 9769.34 6503.19 21878.94 00:13:38.263 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:38.263 nvme2n2 : 1.03 13052.66 50.99 0.00 0.00 9721.60 5469.74 23492.14 00:13:38.263 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:38.263 nvme2n3 : 1.03 13037.63 50.93 0.00 0.00 9723.71 5469.74 24802.86 00:13:38.263 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:38.263 nvme3n1 : 1.03 13022.39 50.87 0.00 0.00 9727.55 5520.15 24802.86 00:13:38.263 [2024-10-25T17:54:56.698Z] =================================================================================================================== 00:13:38.263 [2024-10-25T17:54:56.698Z] Total : 80073.39 312.79 0.00 0.00 9536.57 5318.50 26214.40 00:13:38.895 00:13:38.895 real 0m2.678s 00:13:38.895 user 0m1.958s 00:13:38.895 sys 0m0.511s 00:13:38.895 17:54:57 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:38.895 17:54:57 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:38.895 ************************************ 00:13:38.895 END TEST bdev_write_zeroes 00:13:38.895 ************************************ 00:13:38.895 17:54:57 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:38.895 17:54:57 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:38.895 17:54:57 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:38.895 17:54:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:38.895 ************************************ 00:13:38.895 START TEST bdev_json_nonenclosed 00:13:38.895 ************************************ 00:13:38.895 17:54:57 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:38.895 [2024-10-25 17:54:57.279376] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
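bdev_json_nonenclosed is a negative test: bdevperf is handed a configuration file whose top level is not wrapped in {} and is expected to fail cleanly (the "not enclosed in {}" error and the spdk_app_stop warning below). The actual nonenclosed.json contents are not shown in this log; a hypothetical stand-in:

  # hypothetical file contents: a bare key/value pair, valid only once wrapped in {}
  printf '"subsystems": []\n' > nonenclosed.json
  ./build/examples/bdevperf --json nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 \
      && echo 'unexpected success' >&2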
00:13:38.895 [2024-10-25 17:54:57.279538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70407 ] 00:13:39.167 [2024-10-25 17:54:57.447613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.167 [2024-10-25 17:54:57.590364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.167 [2024-10-25 17:54:57.590485] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:39.167 [2024-10-25 17:54:57.590506] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:39.167 [2024-10-25 17:54:57.590518] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:39.428 ************************************ 00:13:39.428 END TEST bdev_json_nonenclosed 00:13:39.428 ************************************ 00:13:39.428 00:13:39.428 real 0m0.591s 00:13:39.428 user 0m0.357s 00:13:39.428 sys 0m0.128s 00:13:39.428 17:54:57 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:39.428 17:54:57 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:39.428 17:54:57 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:39.428 17:54:57 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:39.428 17:54:57 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:39.428 17:54:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:39.690 ************************************ 00:13:39.690 START TEST bdev_json_nonarray 00:13:39.690 ************************************ 00:13:39.690 17:54:57 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:39.690 [2024-10-25 17:54:57.941181] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:13:39.690 [2024-10-25 17:54:57.941347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70428 ] 00:13:39.690 [2024-10-25 17:54:58.109597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.951 [2024-10-25 17:54:58.212872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.951 [2024-10-25 17:54:58.212973] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
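bdev_json_nonarray is the companion case: the file is properly enclosed, but "subsystems" has the wrong type, producing the "'subsystems' should be an array" error just printed. Again the real file contents are not in the log; a hypothetical equivalent, run the same way as the sketch above:

  # hypothetical file contents: enclosed in {}, but subsystems is an object, not an array
  printf '{ "subsystems": {} }\n' > nonarray.json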
00:13:39.951 [2024-10-25 17:54:58.212990] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:39.951 [2024-10-25 17:54:58.212999] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:40.211 00:13:40.211 real 0m0.575s 00:13:40.211 user 0m0.372s 00:13:40.211 sys 0m0.097s 00:13:40.211 17:54:58 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:40.211 17:54:58 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:40.211 ************************************ 00:13:40.211 END TEST bdev_json_nonarray 00:13:40.211 ************************************ 00:13:40.211 17:54:58 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:40.211 17:54:58 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:40.211 17:54:58 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:40.211 17:54:58 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:40.211 17:54:58 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:40.211 17:54:58 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:40.211 17:54:58 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:40.211 17:54:58 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:40.211 17:54:58 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:40.211 17:54:58 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:40.211 17:54:58 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:40.211 17:54:58 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:40.780 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:12.881 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:39.433 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:39.433 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:39.433 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:39.433 00:14:39.433 real 1m49.707s 00:14:39.433 user 1m32.712s 00:14:39.433 sys 2m42.273s 00:14:39.433 17:55:55 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:39.433 17:55:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:39.433 ************************************ 00:14:39.433 END TEST blockdev_xnvme 00:14:39.433 ************************************ 00:14:39.433 17:55:56 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:39.433 17:55:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:39.433 17:55:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:39.433 17:55:56 -- common/autotest_common.sh@10 -- # set +x 00:14:39.433 ************************************ 00:14:39.433 START TEST ublk 00:14:39.433 ************************************ 00:14:39.433 17:55:56 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:39.433 * Looking for test storage... 
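Before the ublk suite starts, setup.sh detaches the four emulated NVMe controllers (vendor:device 1b36:0010) from the kernel nvme driver and binds them to uio_pci_generic so SPDK can drive them from user space; the virtio disk at 00:03.0 stays on its driver because it carries mounted filesystems. Each "nvme -> uio_pci_generic" line above corresponds roughly to this sysfs sequence (illustrative only; setup.sh's real logic also handles hugepages and permissions):

  # illustrative rebind of one controller; BDF taken from the log above
  bdf=0000:00:11.0
  echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
  echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
  echo "$bdf" > /sys/bus/pci/drivers_probe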
00:14:39.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:39.433 17:55:56 ublk -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:14:39.433 17:55:56 ublk -- common/autotest_common.sh@1689 -- # lcov --version 00:14:39.433 17:55:56 ublk -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:14:39.433 17:55:56 ublk -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:14:39.433 17:55:56 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:39.433 17:55:56 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:39.433 17:55:56 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:39.433 17:55:56 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:14:39.433 17:55:56 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:14:39.433 17:55:56 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:14:39.433 17:55:56 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:14:39.433 17:55:56 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:14:39.433 17:55:56 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:14:39.433 17:55:56 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:14:39.433 17:55:56 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:39.433 17:55:56 ublk -- scripts/common.sh@344 -- # case "$op" in 00:14:39.433 17:55:56 ublk -- scripts/common.sh@345 -- # : 1 00:14:39.433 17:55:56 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:39.433 17:55:56 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:39.433 17:55:56 ublk -- scripts/common.sh@365 -- # decimal 1 00:14:39.433 17:55:56 ublk -- scripts/common.sh@353 -- # local d=1 00:14:39.433 17:55:56 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:39.433 17:55:56 ublk -- scripts/common.sh@355 -- # echo 1 00:14:39.433 17:55:56 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:14:39.433 17:55:56 ublk -- scripts/common.sh@366 -- # decimal 2 00:14:39.433 17:55:56 ublk -- scripts/common.sh@353 -- # local d=2 00:14:39.433 17:55:56 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:39.433 17:55:56 ublk -- scripts/common.sh@355 -- # echo 2 00:14:39.433 17:55:56 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:14:39.433 17:55:56 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:39.433 17:55:56 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:39.433 17:55:56 ublk -- scripts/common.sh@368 -- # return 0 00:14:39.433 17:55:56 ublk -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:39.433 17:55:56 ublk -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:14:39.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.433 --rc genhtml_branch_coverage=1 00:14:39.433 --rc genhtml_function_coverage=1 00:14:39.433 --rc genhtml_legend=1 00:14:39.434 --rc geninfo_all_blocks=1 00:14:39.434 --rc geninfo_unexecuted_blocks=1 00:14:39.434 00:14:39.434 ' 00:14:39.434 17:55:56 ublk -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:14:39.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.434 --rc genhtml_branch_coverage=1 00:14:39.434 --rc genhtml_function_coverage=1 00:14:39.434 --rc genhtml_legend=1 00:14:39.434 --rc geninfo_all_blocks=1 00:14:39.434 --rc geninfo_unexecuted_blocks=1 00:14:39.434 00:14:39.434 ' 00:14:39.434 17:55:56 ublk -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:14:39.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.434 --rc genhtml_branch_coverage=1 00:14:39.434 --rc 
genhtml_function_coverage=1 00:14:39.434 --rc genhtml_legend=1 00:14:39.434 --rc geninfo_all_blocks=1 00:14:39.434 --rc geninfo_unexecuted_blocks=1 00:14:39.434 00:14:39.434 ' 00:14:39.434 17:55:56 ublk -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:14:39.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.434 --rc genhtml_branch_coverage=1 00:14:39.434 --rc genhtml_function_coverage=1 00:14:39.434 --rc genhtml_legend=1 00:14:39.434 --rc geninfo_all_blocks=1 00:14:39.434 --rc geninfo_unexecuted_blocks=1 00:14:39.434 00:14:39.434 ' 00:14:39.434 17:55:56 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:39.434 17:55:56 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:39.434 17:55:56 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:39.434 17:55:56 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:39.434 17:55:56 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:39.434 17:55:56 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:39.434 17:55:56 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:39.434 17:55:56 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:39.434 17:55:56 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:39.434 17:55:56 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:14:39.434 17:55:56 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:14:39.434 17:55:56 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:14:39.434 17:55:56 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:14:39.434 17:55:56 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:14:39.434 17:55:56 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:14:39.434 17:55:56 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:14:39.434 17:55:56 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:14:39.434 17:55:56 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:14:39.434 17:55:56 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:14:39.434 17:55:56 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:14:39.434 17:55:56 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:39.434 17:55:56 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:39.434 17:55:56 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:39.434 ************************************ 00:14:39.434 START TEST test_save_ublk_config 00:14:39.434 ************************************ 00:14:39.434 17:55:56 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:14:39.434 17:55:56 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:14:39.434 17:55:56 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=70738 00:14:39.434 17:55:56 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:14:39.434 17:55:56 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:14:39.434 17:55:56 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 70738 00:14:39.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
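test_save_ublk_config exercises a configuration round-trip: start spdk_tgt with ublk debug logging, create a ublk target (cpumask "1") and a ublk disk backed by malloc0 over RPC, capture the live state with save_config (the large JSON dump below), then relaunch a second target (tgtpid 70795 further on) from that JSON and verify the device comes back. A skeleton of the round-trip using standard rpc.py calls (sketch only; the real script adds waits and assertions, and the exact bdev/ublk RPC arguments are elided here):

  # hedged skeleton of the save/restore round-trip traced below
  ./build/bin/spdk_tgt -L ublk &
  tgtpid=$!
  # ... ublk target and disk are created here; the saved JSON below shows the
  # resulting parameters: cpumask "1", bdev malloc0, ublk_id 0, queue_depth 128
  ./scripts/rpc.py save_config > /tmp/ublk_config.json
  kill "$tgtpid"; wait "$tgtpid"
  ./build/bin/spdk_tgt -L ublk --json /tmp/ublk_config.json &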
00:14:39.434 17:55:56 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 70738 ']' 00:14:39.434 17:55:56 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.434 17:55:56 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:39.434 17:55:56 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.434 17:55:56 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:39.434 17:55:56 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:39.434 [2024-10-25 17:55:56.260882] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:14:39.434 [2024-10-25 17:55:56.261129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70738 ] 00:14:39.434 [2024-10-25 17:55:56.422846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.434 [2024-10-25 17:55:56.520900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.434 17:55:57 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:39.434 17:55:57 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:14:39.434 17:55:57 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:14:39.434 17:55:57 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:14:39.434 17:55:57 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.434 17:55:57 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:39.434 [2024-10-25 17:55:57.161578] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:39.434 [2024-10-25 17:55:57.162360] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:39.434 malloc0 00:14:39.434 [2024-10-25 17:55:57.225688] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:39.434 [2024-10-25 17:55:57.225761] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:39.434 [2024-10-25 17:55:57.225770] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:39.434 [2024-10-25 17:55:57.225777] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:39.434 [2024-10-25 17:55:57.234676] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:39.434 [2024-10-25 17:55:57.234710] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:39.434 [2024-10-25 17:55:57.241618] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:39.434 [2024-10-25 17:55:57.241756] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:39.434 [2024-10-25 17:55:57.258600] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:39.434 0 00:14:39.434 17:55:57 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.434 17:55:57 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:14:39.434 17:55:57 ublk.test_save_ublk_config -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.434 17:55:57 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:39.434 17:55:57 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.434 17:55:57 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:14:39.434 "subsystems": [ 00:14:39.434 { 00:14:39.434 "subsystem": "fsdev", 00:14:39.434 "config": [ 00:14:39.434 { 00:14:39.434 "method": "fsdev_set_opts", 00:14:39.434 "params": { 00:14:39.434 "fsdev_io_pool_size": 65535, 00:14:39.434 "fsdev_io_cache_size": 256 00:14:39.434 } 00:14:39.434 } 00:14:39.434 ] 00:14:39.434 }, 00:14:39.434 { 00:14:39.434 "subsystem": "keyring", 00:14:39.434 "config": [] 00:14:39.434 }, 00:14:39.434 { 00:14:39.434 "subsystem": "iobuf", 00:14:39.434 "config": [ 00:14:39.434 { 00:14:39.434 "method": "iobuf_set_options", 00:14:39.434 "params": { 00:14:39.434 "small_pool_count": 8192, 00:14:39.434 "large_pool_count": 1024, 00:14:39.434 "small_bufsize": 8192, 00:14:39.434 "large_bufsize": 135168, 00:14:39.434 "enable_numa": false 00:14:39.434 } 00:14:39.434 } 00:14:39.434 ] 00:14:39.434 }, 00:14:39.434 { 00:14:39.434 "subsystem": "sock", 00:14:39.434 "config": [ 00:14:39.434 { 00:14:39.434 "method": "sock_set_default_impl", 00:14:39.434 "params": { 00:14:39.434 "impl_name": "posix" 00:14:39.434 } 00:14:39.434 }, 00:14:39.434 { 00:14:39.434 "method": "sock_impl_set_options", 00:14:39.434 "params": { 00:14:39.434 "impl_name": "ssl", 00:14:39.434 "recv_buf_size": 4096, 00:14:39.434 "send_buf_size": 4096, 00:14:39.434 "enable_recv_pipe": true, 00:14:39.434 "enable_quickack": false, 00:14:39.434 "enable_placement_id": 0, 00:14:39.434 "enable_zerocopy_send_server": true, 00:14:39.434 "enable_zerocopy_send_client": false, 00:14:39.434 "zerocopy_threshold": 0, 00:14:39.434 "tls_version": 0, 00:14:39.434 "enable_ktls": false 00:14:39.434 } 00:14:39.434 }, 00:14:39.434 { 00:14:39.434 "method": "sock_impl_set_options", 00:14:39.434 "params": { 00:14:39.434 "impl_name": "posix", 00:14:39.434 "recv_buf_size": 2097152, 00:14:39.434 "send_buf_size": 2097152, 00:14:39.434 "enable_recv_pipe": true, 00:14:39.434 "enable_quickack": false, 00:14:39.434 "enable_placement_id": 0, 00:14:39.434 "enable_zerocopy_send_server": true, 00:14:39.434 "enable_zerocopy_send_client": false, 00:14:39.434 "zerocopy_threshold": 0, 00:14:39.434 "tls_version": 0, 00:14:39.434 "enable_ktls": false 00:14:39.434 } 00:14:39.434 } 00:14:39.434 ] 00:14:39.434 }, 00:14:39.434 { 00:14:39.434 "subsystem": "vmd", 00:14:39.434 "config": [] 00:14:39.434 }, 00:14:39.434 { 00:14:39.434 "subsystem": "accel", 00:14:39.434 "config": [ 00:14:39.434 { 00:14:39.434 "method": "accel_set_options", 00:14:39.434 "params": { 00:14:39.434 "small_cache_size": 128, 00:14:39.434 "large_cache_size": 16, 00:14:39.434 "task_count": 2048, 00:14:39.434 "sequence_count": 2048, 00:14:39.434 "buf_count": 2048 00:14:39.434 } 00:14:39.434 } 00:14:39.434 ] 00:14:39.434 }, 00:14:39.434 { 00:14:39.434 "subsystem": "bdev", 00:14:39.434 "config": [ 00:14:39.434 { 00:14:39.434 "method": "bdev_set_options", 00:14:39.434 "params": { 00:14:39.434 "bdev_io_pool_size": 65535, 00:14:39.434 "bdev_io_cache_size": 256, 00:14:39.434 "bdev_auto_examine": true, 00:14:39.434 "iobuf_small_cache_size": 128, 00:14:39.434 "iobuf_large_cache_size": 16 00:14:39.434 } 00:14:39.434 }, 00:14:39.434 { 00:14:39.434 "method": "bdev_raid_set_options", 00:14:39.434 "params": { 00:14:39.434 "process_window_size_kb": 1024, 00:14:39.434 
"process_max_bandwidth_mb_sec": 0 00:14:39.435 } 00:14:39.435 }, 00:14:39.435 { 00:14:39.435 "method": "bdev_iscsi_set_options", 00:14:39.435 "params": { 00:14:39.435 "timeout_sec": 30 00:14:39.435 } 00:14:39.435 }, 00:14:39.435 { 00:14:39.435 "method": "bdev_nvme_set_options", 00:14:39.435 "params": { 00:14:39.435 "action_on_timeout": "none", 00:14:39.435 "timeout_us": 0, 00:14:39.435 "timeout_admin_us": 0, 00:14:39.435 "keep_alive_timeout_ms": 10000, 00:14:39.435 "arbitration_burst": 0, 00:14:39.435 "low_priority_weight": 0, 00:14:39.435 "medium_priority_weight": 0, 00:14:39.435 "high_priority_weight": 0, 00:14:39.435 "nvme_adminq_poll_period_us": 10000, 00:14:39.435 "nvme_ioq_poll_period_us": 0, 00:14:39.435 "io_queue_requests": 0, 00:14:39.435 "delay_cmd_submit": true, 00:14:39.435 "transport_retry_count": 4, 00:14:39.435 "bdev_retry_count": 3, 00:14:39.435 "transport_ack_timeout": 0, 00:14:39.435 "ctrlr_loss_timeout_sec": 0, 00:14:39.435 "reconnect_delay_sec": 0, 00:14:39.435 "fast_io_fail_timeout_sec": 0, 00:14:39.435 "disable_auto_failback": false, 00:14:39.435 "generate_uuids": false, 00:14:39.435 "transport_tos": 0, 00:14:39.435 "nvme_error_stat": false, 00:14:39.435 "rdma_srq_size": 0, 00:14:39.435 "io_path_stat": false, 00:14:39.435 "allow_accel_sequence": false, 00:14:39.435 "rdma_max_cq_size": 0, 00:14:39.435 "rdma_cm_event_timeout_ms": 0, 00:14:39.435 "dhchap_digests": [ 00:14:39.435 "sha256", 00:14:39.435 "sha384", 00:14:39.435 "sha512" 00:14:39.435 ], 00:14:39.435 "dhchap_dhgroups": [ 00:14:39.435 "null", 00:14:39.435 "ffdhe2048", 00:14:39.435 "ffdhe3072", 00:14:39.435 "ffdhe4096", 00:14:39.435 "ffdhe6144", 00:14:39.435 "ffdhe8192" 00:14:39.435 ] 00:14:39.435 } 00:14:39.435 }, 00:14:39.435 { 00:14:39.435 "method": "bdev_nvme_set_hotplug", 00:14:39.435 "params": { 00:14:39.435 "period_us": 100000, 00:14:39.435 "enable": false 00:14:39.435 } 00:14:39.435 }, 00:14:39.435 { 00:14:39.435 "method": "bdev_malloc_create", 00:14:39.435 "params": { 00:14:39.435 "name": "malloc0", 00:14:39.435 "num_blocks": 8192, 00:14:39.435 "block_size": 4096, 00:14:39.435 "physical_block_size": 4096, 00:14:39.435 "uuid": "c60cd84e-767a-4031-9cba-bcc0d1c3758e", 00:14:39.435 "optimal_io_boundary": 0, 00:14:39.435 "md_size": 0, 00:14:39.435 "dif_type": 0, 00:14:39.435 "dif_is_head_of_md": false, 00:14:39.435 "dif_pi_format": 0 00:14:39.435 } 00:14:39.435 }, 00:14:39.435 { 00:14:39.435 "method": "bdev_wait_for_examine" 00:14:39.435 } 00:14:39.435 ] 00:14:39.435 }, 00:14:39.435 { 00:14:39.435 "subsystem": "scsi", 00:14:39.435 "config": null 00:14:39.435 }, 00:14:39.435 { 00:14:39.435 "subsystem": "scheduler", 00:14:39.435 "config": [ 00:14:39.435 { 00:14:39.435 "method": "framework_set_scheduler", 00:14:39.435 "params": { 00:14:39.435 "name": "static" 00:14:39.435 } 00:14:39.435 } 00:14:39.435 ] 00:14:39.435 }, 00:14:39.435 { 00:14:39.435 "subsystem": "vhost_scsi", 00:14:39.435 "config": [] 00:14:39.435 }, 00:14:39.435 { 00:14:39.435 "subsystem": "vhost_blk", 00:14:39.435 "config": [] 00:14:39.435 }, 00:14:39.435 { 00:14:39.435 "subsystem": "ublk", 00:14:39.435 "config": [ 00:14:39.435 { 00:14:39.435 "method": "ublk_create_target", 00:14:39.435 "params": { 00:14:39.435 "cpumask": "1" 00:14:39.435 } 00:14:39.435 }, 00:14:39.435 { 00:14:39.435 "method": "ublk_start_disk", 00:14:39.435 "params": { 00:14:39.435 "bdev_name": "malloc0", 00:14:39.435 "ublk_id": 0, 00:14:39.435 "num_queues": 1, 00:14:39.435 "queue_depth": 128 00:14:39.435 } 00:14:39.435 } 00:14:39.435 ] 00:14:39.435 }, 00:14:39.435 { 
00:14:39.435 "subsystem": "nbd", 00:14:39.435 "config": [] 00:14:39.435 }, 00:14:39.435 { 00:14:39.435 "subsystem": "nvmf", 00:14:39.435 "config": [ 00:14:39.435 { 00:14:39.435 "method": "nvmf_set_config", 00:14:39.435 "params": { 00:14:39.435 "discovery_filter": "match_any", 00:14:39.435 "admin_cmd_passthru": { 00:14:39.435 "identify_ctrlr": false 00:14:39.435 }, 00:14:39.435 "dhchap_digests": [ 00:14:39.435 "sha256", 00:14:39.435 "sha384", 00:14:39.435 "sha512" 00:14:39.435 ], 00:14:39.435 "dhchap_dhgroups": [ 00:14:39.435 "null", 00:14:39.435 "ffdhe2048", 00:14:39.435 "ffdhe3072", 00:14:39.435 "ffdhe4096", 00:14:39.435 "ffdhe6144", 00:14:39.435 "ffdhe8192" 00:14:39.435 ] 00:14:39.435 } 00:14:39.435 }, 00:14:39.435 { 00:14:39.435 "method": "nvmf_set_max_subsystems", 00:14:39.435 "params": { 00:14:39.435 "max_subsystems": 1024 00:14:39.435 } 00:14:39.435 }, 00:14:39.435 { 00:14:39.435 "method": "nvmf_set_crdt", 00:14:39.435 "params": { 00:14:39.435 "crdt1": 0, 00:14:39.435 "crdt2": 0, 00:14:39.435 "crdt3": 0 00:14:39.435 } 00:14:39.435 } 00:14:39.435 ] 00:14:39.435 }, 00:14:39.435 { 00:14:39.435 "subsystem": "iscsi", 00:14:39.435 "config": [ 00:14:39.435 { 00:14:39.435 "method": "iscsi_set_options", 00:14:39.435 "params": { 00:14:39.435 "node_base": "iqn.2016-06.io.spdk", 00:14:39.435 "max_sessions": 128, 00:14:39.435 "max_connections_per_session": 2, 00:14:39.435 "max_queue_depth": 64, 00:14:39.435 "default_time2wait": 2, 00:14:39.435 "default_time2retain": 20, 00:14:39.435 "first_burst_length": 8192, 00:14:39.435 "immediate_data": true, 00:14:39.435 "allow_duplicated_isid": false, 00:14:39.435 "error_recovery_level": 0, 00:14:39.435 "nop_timeout": 60, 00:14:39.435 "nop_in_interval": 30, 00:14:39.435 "disable_chap": false, 00:14:39.435 "require_chap": false, 00:14:39.435 "mutual_chap": false, 00:14:39.435 "chap_group": 0, 00:14:39.435 "max_large_datain_per_connection": 64, 00:14:39.435 "max_r2t_per_connection": 4, 00:14:39.435 "pdu_pool_size": 36864, 00:14:39.435 "immediate_data_pool_size": 16384, 00:14:39.435 "data_out_pool_size": 2048 00:14:39.435 } 00:14:39.435 } 00:14:39.435 ] 00:14:39.435 } 00:14:39.435 ] 00:14:39.435 }' 00:14:39.435 17:55:57 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 70738 00:14:39.435 17:55:57 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 70738 ']' 00:14:39.435 17:55:57 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 70738 00:14:39.435 17:55:57 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:14:39.435 17:55:57 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:39.435 17:55:57 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70738 00:14:39.435 killing process with pid 70738 00:14:39.435 17:55:57 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:39.435 17:55:57 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:39.435 17:55:57 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70738' 00:14:39.435 17:55:57 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 70738 00:14:39.435 17:55:57 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 70738 00:14:40.376 [2024-10-25 17:55:58.643824] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:40.376 [2024-10-25 17:55:58.681670] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:40.376 [2024-10-25 17:55:58.681786] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:40.376 [2024-10-25 17:55:58.689585] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:40.376 [2024-10-25 17:55:58.689633] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:40.376 [2024-10-25 17:55:58.689645] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:40.376 [2024-10-25 17:55:58.689669] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:40.376 [2024-10-25 17:55:58.689806] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:41.769 17:55:59 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=70795 00:14:41.769 17:55:59 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 70795 00:14:41.769 17:55:59 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 70795 ']' 00:14:41.769 17:55:59 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:14:41.769 "subsystems": [ 00:14:41.769 { 00:14:41.769 "subsystem": "fsdev", 00:14:41.769 "config": [ 00:14:41.769 { 00:14:41.769 "method": "fsdev_set_opts", 00:14:41.769 "params": { 00:14:41.769 "fsdev_io_pool_size": 65535, 00:14:41.769 "fsdev_io_cache_size": 256 00:14:41.769 } 00:14:41.769 } 00:14:41.769 ] 00:14:41.769 }, 00:14:41.769 { 00:14:41.769 "subsystem": "keyring", 00:14:41.769 "config": [] 00:14:41.769 }, 00:14:41.769 { 00:14:41.769 "subsystem": "iobuf", 00:14:41.769 "config": [ 00:14:41.769 { 00:14:41.769 "method": "iobuf_set_options", 00:14:41.769 "params": { 00:14:41.769 "small_pool_count": 8192, 00:14:41.769 "large_pool_count": 1024, 00:14:41.769 "small_bufsize": 8192, 00:14:41.769 "large_bufsize": 135168, 00:14:41.769 "enable_numa": false 00:14:41.769 } 00:14:41.769 } 00:14:41.769 ] 00:14:41.769 }, 00:14:41.769 { 00:14:41.769 "subsystem": "sock", 00:14:41.769 "config": [ 00:14:41.769 { 00:14:41.769 "method": "sock_set_default_impl", 00:14:41.769 "params": { 00:14:41.769 "impl_name": "posix" 00:14:41.769 } 00:14:41.769 }, 00:14:41.769 { 00:14:41.769 "method": "sock_impl_set_options", 00:14:41.769 "params": { 00:14:41.769 "impl_name": "ssl", 00:14:41.769 "recv_buf_size": 4096, 00:14:41.769 "send_buf_size": 4096, 00:14:41.769 "enable_recv_pipe": true, 00:14:41.769 "enable_quickack": false, 00:14:41.769 "enable_placement_id": 0, 00:14:41.769 "enable_zerocopy_send_server": true, 00:14:41.769 "enable_zerocopy_send_client": false, 00:14:41.769 "zerocopy_threshold": 0, 00:14:41.769 "tls_version": 0, 00:14:41.769 "enable_ktls": false 00:14:41.769 } 00:14:41.769 }, 00:14:41.769 { 00:14:41.769 "method": "sock_impl_set_options", 00:14:41.769 "params": { 00:14:41.769 "impl_name": "posix", 00:14:41.769 "recv_buf_size": 2097152, 00:14:41.769 "send_buf_size": 2097152, 00:14:41.769 "enable_recv_pipe": true, 00:14:41.769 "enable_quickack": false, 00:14:41.769 "enable_placement_id": 0, 00:14:41.769 "enable_zerocopy_send_server": true, 00:14:41.769 "enable_zerocopy_send_client": false, 00:14:41.769 "zerocopy_threshold": 0, 00:14:41.769 "tls_version": 0, 00:14:41.769 "enable_ktls": false 00:14:41.769 } 00:14:41.769 } 00:14:41.769 ] 00:14:41.769 }, 00:14:41.769 { 00:14:41.769 "subsystem": "vmd", 00:14:41.769 "config": [] 00:14:41.769 }, 00:14:41.769 { 00:14:41.769 "subsystem": "accel", 00:14:41.769 "config": [ 00:14:41.769 { 00:14:41.769 "method": "accel_set_options", 00:14:41.769 "params": { 00:14:41.769 "small_cache_size": 128, 
00:14:41.769 "large_cache_size": 16, 00:14:41.769 "task_count": 2048, 00:14:41.769 "sequence_count": 2048, 00:14:41.769 "buf_count": 2048 00:14:41.769 } 00:14:41.769 } 00:14:41.769 ] 00:14:41.769 }, 00:14:41.769 { 00:14:41.769 "subsystem": "bdev", 00:14:41.769 "config": [ 00:14:41.769 { 00:14:41.769 "method": "bdev_set_options", 00:14:41.769 "params": { 00:14:41.770 "bdev_io_pool_size": 65535, 00:14:41.770 "bdev_io_cache_size": 256, 00:14:41.770 "bdev_auto_examine": true, 00:14:41.770 "iobuf_small_cache_size": 128, 00:14:41.770 "iobuf_large_cache_size": 16 00:14:41.770 } 00:14:41.770 }, 00:14:41.770 { 00:14:41.770 "method": "bdev_raid_set_options", 00:14:41.770 "params": { 00:14:41.770 "process_window_size_kb": 1024, 00:14:41.770 "process_max_bandwidth_mb_sec": 0 00:14:41.770 } 00:14:41.770 }, 00:14:41.770 { 00:14:41.770 "method": "bdev_iscsi_set_options", 00:14:41.770 "params": { 00:14:41.770 "timeout_sec": 30 00:14:41.770 } 00:14:41.770 }, 00:14:41.770 { 00:14:41.770 "method": "bdev_nvme_set_options", 00:14:41.770 "params": { 00:14:41.770 "action_on_timeout": "none", 00:14:41.770 "timeout_us": 0, 00:14:41.770 "timeout_admin_us": 0, 00:14:41.770 "keep_alive_timeout_ms": 10000, 00:14:41.770 "arbitration_burst": 0, 00:14:41.770 "low_priority_weight": 0, 00:14:41.770 "medium_priority_weight": 0, 00:14:41.770 "high_priority_weight": 0, 00:14:41.770 "nvme_adminq_poll_period_us": 10000, 00:14:41.770 "nvme_ioq_poll_period_us": 0, 00:14:41.770 "io_queue_requests": 0, 00:14:41.770 "delay_cmd_submit": true, 00:14:41.770 "transport_retry_count": 4, 00:14:41.770 "bdev_retry_count": 3, 00:14:41.770 "transport_ack_timeout": 0, 00:14:41.770 "ctrlr_loss_timeout_sec": 0, 00:14:41.770 "reconnect_delay_sec": 0, 00:14:41.770 "fast_io_fail_timeout_sec": 0, 00:14:41.770 "disable_auto_failback": false, 00:14:41.770 "generate_uuids": false, 00:14:41.770 "transport_tos": 0, 00:14:41.770 "nvme_error_stat": false, 00:14:41.770 "rdma_srq_size": 0, 00:14:41.770 "io_path_stat": false, 00:14:41.770 "allow_accel_sequence": false, 00:14:41.770 "rdma_max_cq_size": 0, 00:14:41.770 "rdma_cm_event_timeout_ms": 0, 00:14:41.770 "dhchap_digests": [ 00:14:41.770 "sha256", 00:14:41.770 "sha384", 00:14:41.770 "sha512" 00:14:41.770 ], 00:14:41.770 "dhchap_dhgroups": [ 00:14:41.770 "null", 00:14:41.770 "ffdhe2048", 00:14:41.770 "ffdhe3072", 00:14:41.770 "ffdhe4096", 00:14:41.770 "ffdhe6144", 00:14:41.770 "ffdhe8192" 00:14:41.770 ] 00:14:41.770 } 00:14:41.770 }, 00:14:41.770 { 00:14:41.770 "method": "bdev_nvme_set_hotplug", 00:14:41.770 "params": { 00:14:41.770 "period_us": 100000, 00:14:41.770 "enable": false 00:14:41.770 } 00:14:41.770 }, 00:14:41.770 { 00:14:41.770 "method": "bdev_malloc_create", 00:14:41.770 "params": { 00:14:41.770 "name": "malloc0", 00:14:41.770 "num_blocks": 8192, 00:14:41.770 "block_size": 4096, 00:14:41.770 "physical_block_size": 4096, 00:14:41.770 "uuid": "c60cd84e-767a-4031-9cba-bcc0d1c3758e", 00:14:41.770 "optimal_io_boundary": 0, 00:14:41.770 "md_size": 0, 00:14:41.770 "dif_type": 0, 00:14:41.770 "dif_is_head_of_md": false, 00:14:41.770 "dif_pi_format": 0 00:14:41.770 } 00:14:41.770 }, 00:14:41.770 { 00:14:41.770 "method": "bdev_wait_for_examine" 00:14:41.770 } 00:14:41.770 ] 00:14:41.770 }, 00:14:41.770 { 00:14:41.770 "subsystem": "scsi", 00:14:41.770 "config": null 00:14:41.770 }, 00:14:41.770 { 00:14:41.770 "subsystem": "scheduler", 00:14:41.770 "config": [ 00:14:41.770 { 00:14:41.770 "method": "framework_set_scheduler", 00:14:41.770 "params": { 00:14:41.770 "name": "static" 00:14:41.770 } 
00:14:41.770 } 00:14:41.770 ] 00:14:41.770 }, 00:14:41.770 { 00:14:41.770 "subsystem": "vhost_scsi", 00:14:41.770 "config": [] 00:14:41.770 }, 00:14:41.770 { 00:14:41.770 "subsystem": "vhost_blk", 00:14:41.770 "config": [] 00:14:41.770 }, 00:14:41.770 { 00:14:41.770 "subsystem": "ublk", 00:14:41.770 "config": [ 00:14:41.770 { 00:14:41.770 "method": "ublk_create_target", 00:14:41.770 "params": { 00:14:41.770 "cpumask": "1" 00:14:41.770 } 00:14:41.770 }, 00:14:41.770 { 00:14:41.770 "method": "ublk_start_disk", 00:14:41.770 "params": { 00:14:41.770 "bdev_name": "malloc0", 00:14:41.770 "ublk_id": 0, 00:14:41.770 "num_queues": 1, 00:14:41.770 "queue_depth": 128 00:14:41.770 } 00:14:41.770 } 00:14:41.770 ] 00:14:41.770 }, 00:14:41.770 { 00:14:41.770 "subsystem": "nbd", 00:14:41.770 "config": [] 00:14:41.770 }, 00:14:41.770 { 00:14:41.770 "subsystem": "nvmf", 00:14:41.770 "config": [ 00:14:41.770 { 00:14:41.770 "method": "nvmf_set_config", 00:14:41.770 "params": { 00:14:41.770 "discovery_filter": "match_any", 00:14:41.770 "admin_cmd_passthru": { 00:14:41.770 "identify_ctrlr": false 00:14:41.770 }, 00:14:41.770 "dhchap_digests": [ 00:14:41.770 "sha256", 00:14:41.770 "sha384", 00:14:41.770 "sha512" 00:14:41.770 ], 00:14:41.770 "dhchap_dhgroups": [ 00:14:41.770 "null", 00:14:41.770 "ffdhe2048", 00:14:41.770 "ffdhe3072", 00:14:41.770 "ffdhe4096", 00:14:41.770 "ffdhe6144", 00:14:41.770 "ffdhe8192" 00:14:41.770 ] 00:14:41.770 } 00:14:41.770 }, 00:14:41.770 { 00:14:41.770 "method": "nvmf_set_max_subsystems", 00:14:41.770 "params": { 00:14:41.770 "max_subsystems": 1024 00:14:41.770 } 00:14:41.770 }, 00:14:41.770 { 00:14:41.770 "method": "nvmf_set_crdt", 00:14:41.770 "params": { 00:14:41.770 "crdt1": 0, 00:14:41.770 "crdt2": 0, 00:14:41.770 "crdt3": 0 00:14:41.770 } 00:14:41.770 } 00:14:41.770 ] 00:14:41.770 }, 00:14:41.770 { 00:14:41.770 "subsystem": "iscsi", 00:14:41.770 "config": [ 00:14:41.770 { 00:14:41.770 "method": "iscsi_set_options", 00:14:41.770 "params": { 00:14:41.770 "node_base": "iqn.2016-06.io.spdk", 00:14:41.770 "max_sessions": 128, 00:14:41.770 "max_connections_per_session": 2, 00:14:41.770 "max_queue_depth": 64, 00:14:41.770 "default_time2wait": 2, 00:14:41.770 "default_time2retain": 20, 00:14:41.770 "first_burst_length": 8192, 00:14:41.770 "immediate_data": true, 00:14:41.770 "allow_duplicated_isid": false, 00:14:41.770 "error_recovery_level": 0, 00:14:41.770 "nop_timeout": 60, 00:14:41.770 "nop_in_interval": 30, 00:14:41.770 "disable_chap": false, 00:14:41.770 "require_chap": false, 00:14:41.770 "mutual_chap": false, 00:14:41.770 "chap_group": 0, 00:14:41.770 "max_large_datain_per_connection": 64, 00:14:41.770 "max_r2t_per_connection": 4, 00:14:41.770 "pdu_pool_size": 36864, 00:14:41.770 "immediate_data_pool_size": 16384, 00:14:41.770 "data_out_pool_size": 2048 00:14:41.770 } 00:14:41.770 } 00:14:41.770 ] 00:14:41.770 } 00:14:41.770 ] 00:14:41.770 }' 00:14:41.770 17:55:59 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:14:41.770 17:55:59 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.770 17:55:59 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:41.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
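Note: the JSON blob echoed above is the configuration saved from the first target (pid 70738) being replayed into a fresh spdk_tgt. The shell feeds the echo through process substitution, which is why the target is started with "-c /dev/fd/63" rather than a file path. A minimal sketch of the same save/restore round-trip, assuming the stock scripts/rpc.py client and illustrative paths:

  # Capture the live configuration of a running SPDK target
  $ scripts/rpc.py save_config > /tmp/ublk_config.json
  # Restart the target and replay it; <(...) is bash process
  # substitution and shows up as /dev/fd/NN in the xtrace output
  $ build/bin/spdk_tgt -L ublk -c <(cat /tmp/ublk_config.json)

The test passes only if the second target, booted purely from the saved JSON, comes back with the same /dev/ublkb0 disk, which the rpc_cmd ublk_get_disks check below confirms.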
00:14:41.770 17:55:59 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.770 17:55:59 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:41.770 17:55:59 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:41.770 [2024-10-25 17:56:00.062638] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:14:41.770 [2024-10-25 17:56:00.063784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70795 ] 00:14:42.077 [2024-10-25 17:56:00.221338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.077 [2024-10-25 17:56:00.321007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.649 [2024-10-25 17:56:01.080575] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:42.649 [2024-10-25 17:56:01.081378] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:42.910 [2024-10-25 17:56:01.088692] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:42.910 [2024-10-25 17:56:01.088763] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:42.910 [2024-10-25 17:56:01.088772] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:42.910 [2024-10-25 17:56:01.088779] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:42.910 [2024-10-25 17:56:01.097635] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:42.910 [2024-10-25 17:56:01.097653] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:42.910 [2024-10-25 17:56:01.104588] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:42.910 [2024-10-25 17:56:01.104672] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:42.910 [2024-10-25 17:56:01.121581] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:42.910 17:56:01 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:42.910 17:56:01 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:14:42.910 17:56:01 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:14:42.910 17:56:01 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.910 17:56:01 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:14:42.910 17:56:01 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:42.910 17:56:01 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.910 17:56:01 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:42.910 17:56:01 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:14:42.910 17:56:01 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 70795 00:14:42.910 17:56:01 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 70795 ']' 00:14:42.910 17:56:01 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 
70795 00:14:42.910 17:56:01 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:14:42.910 17:56:01 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:42.910 17:56:01 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70795 00:14:42.910 17:56:01 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:42.910 killing process with pid 70795 00:14:42.910 17:56:01 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:42.910 17:56:01 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70795' 00:14:42.910 17:56:01 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 70795 00:14:42.910 17:56:01 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 70795 00:14:44.288 [2024-10-25 17:56:02.370003] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:44.288 [2024-10-25 17:56:02.399665] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:44.288 [2024-10-25 17:56:02.399798] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:44.288 [2024-10-25 17:56:02.408585] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:44.288 [2024-10-25 17:56:02.408640] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:44.288 [2024-10-25 17:56:02.408648] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:44.288 [2024-10-25 17:56:02.408674] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:44.288 [2024-10-25 17:56:02.408816] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:45.222 17:56:03 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:45.222 00:14:45.222 real 0m7.435s 00:14:45.222 user 0m5.247s 00:14:45.222 sys 0m2.771s 00:14:45.222 17:56:03 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:45.222 17:56:03 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:45.222 ************************************ 00:14:45.222 END TEST test_save_ublk_config 00:14:45.222 ************************************ 00:14:45.222 17:56:03 ublk -- ublk/ublk.sh@139 -- # spdk_pid=70868 00:14:45.222 17:56:03 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:45.222 17:56:03 ublk -- ublk/ublk.sh@141 -- # waitforlisten 70868 00:14:45.222 17:56:03 ublk -- common/autotest_common.sh@831 -- # '[' -z 70868 ']' 00:14:45.222 17:56:03 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:45.222 17:56:03 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.222 17:56:03 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:45.222 17:56:03 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.222 17:56:03 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:45.222 17:56:03 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:45.480 [2024-10-25 17:56:03.716222] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
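Note: test_save_ublk_config finishes in about 7.4 seconds, and the harness then launches the long-lived target for the remaining tests with "-m 0x3". The -m flag takes a hexadecimal bitmask of CPU cores (bit N selects core N), so 0x3 = 0b11 pins the target to cores 0 and 1, matching the two "Reactor started on core" lines that follow. A minimal sketch of the launch, with the flags taken from the log:

  # 0x3 selects cores 0 and 1; each selected core runs one SPDK reactor
  $ build/bin/spdk_tgt -m 0x3 -L ublk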
00:14:45.481 [2024-10-25 17:56:03.716317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70868 ] 00:14:45.481 [2024-10-25 17:56:03.864287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:45.738 [2024-10-25 17:56:03.947682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.738 [2024-10-25 17:56:03.947799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.302 17:56:04 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:46.302 17:56:04 ublk -- common/autotest_common.sh@864 -- # return 0 00:14:46.302 17:56:04 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:46.302 17:56:04 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:46.302 17:56:04 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:46.302 17:56:04 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:46.302 ************************************ 00:14:46.302 START TEST test_create_ublk 00:14:46.302 ************************************ 00:14:46.302 17:56:04 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:14:46.302 17:56:04 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:46.302 17:56:04 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.302 17:56:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:46.302 [2024-10-25 17:56:04.569577] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:46.302 [2024-10-25 17:56:04.571148] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:46.302 17:56:04 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.302 17:56:04 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:14:46.302 17:56:04 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:46.302 17:56:04 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.302 17:56:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:46.302 17:56:04 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.302 17:56:04 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:46.302 17:56:04 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:46.302 17:56:04 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.302 17:56:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:46.302 [2024-10-25 17:56:04.729685] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:46.302 [2024-10-25 17:56:04.729985] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:46.302 [2024-10-25 17:56:04.729998] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:46.302 [2024-10-25 17:56:04.730004] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:46.560 [2024-10-25 17:56:04.738749] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:46.560 [2024-10-25 17:56:04.738766] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:46.560 
[2024-10-25 17:56:04.745578] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:46.560 [2024-10-25 17:56:04.755622] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:46.560 [2024-10-25 17:56:04.779593] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:46.560 17:56:04 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.560 17:56:04 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:46.560 17:56:04 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:46.560 17:56:04 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:46.560 17:56:04 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.560 17:56:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:46.560 17:56:04 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.560 17:56:04 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:46.560 { 00:14:46.560 "ublk_device": "/dev/ublkb0", 00:14:46.560 "id": 0, 00:14:46.560 "queue_depth": 512, 00:14:46.560 "num_queues": 4, 00:14:46.560 "bdev_name": "Malloc0" 00:14:46.560 } 00:14:46.560 ]' 00:14:46.560 17:56:04 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:46.560 17:56:04 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:46.560 17:56:04 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:46.560 17:56:04 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:46.560 17:56:04 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:46.560 17:56:04 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:46.560 17:56:04 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:46.560 17:56:04 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:46.560 17:56:04 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:46.560 17:56:04 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:46.560 17:56:04 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:46.560 17:56:04 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:46.560 17:56:04 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:14:46.560 17:56:04 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:14:46.560 17:56:04 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:14:46.560 17:56:04 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:46.560 17:56:04 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:46.560 17:56:04 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:46.560 17:56:04 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:46.560 17:56:04 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:46.560 17:56:04 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
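Note: run_fio_test expands the template above into a single fio invocation: a time-based 10-second sequential write of the 0xcc pattern across the first 128 MiB (134217728 bytes) of /dev/ublkb0 with O_DIRECT. The equivalent standalone command, reproduced from the log:

  $ fio --name=fio_test --filename=/dev/ublkb0 --offset=0 \
        --size=134217728 --rw=write --direct=1 \
        --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc \
        --verify_state_save=0

Because the job is time-based, the 10-second budget is consumed entirely by writes, so fio notes below that the separate verification read phase never starts.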
00:14:46.560 17:56:04 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:46.819 fio: verification read phase will never start because write phase uses all of runtime 00:14:46.819 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:46.819 fio-3.35 00:14:46.819 Starting 1 process 00:14:56.848 00:14:56.848 fio_test: (groupid=0, jobs=1): err= 0: pid=70912: Fri Oct 25 17:56:15 2024 00:14:56.848 write: IOPS=13.9k, BW=54.3MiB/s (56.9MB/s)(543MiB/10001msec); 0 zone resets 00:14:56.848 clat (usec): min=44, max=3973, avg=71.08, stdev=98.36 00:14:56.848 lat (usec): min=44, max=3974, avg=71.56, stdev=98.39 00:14:56.848 clat percentiles (usec): 00:14:56.848 | 1.00th=[ 51], 5.00th=[ 55], 10.00th=[ 58], 20.00th=[ 61], 00:14:56.848 | 30.00th=[ 63], 40.00th=[ 64], 50.00th=[ 66], 60.00th=[ 68], 00:14:56.848 | 70.00th=[ 71], 80.00th=[ 74], 90.00th=[ 79], 95.00th=[ 84], 00:14:56.848 | 99.00th=[ 98], 99.50th=[ 117], 99.90th=[ 2073], 99.95th=[ 2835], 00:14:56.848 | 99.99th=[ 3556] 00:14:56.848 bw ( KiB/s): min=50408, max=59144, per=99.93%, avg=55568.00, stdev=3026.40, samples=19 00:14:56.848 iops : min=12602, max=14786, avg=13892.00, stdev=756.60, samples=19 00:14:56.848 lat (usec) : 50=0.60%, 100=98.51%, 250=0.64%, 500=0.06%, 750=0.01% 00:14:56.848 lat (usec) : 1000=0.01% 00:14:56.848 lat (msec) : 2=0.06%, 4=0.11% 00:14:56.848 cpu : usr=2.21%, sys=13.95%, ctx=139032, majf=0, minf=798 00:14:56.848 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:56.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.848 issued rwts: total=0,139038,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:56.848 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:56.848 00:14:56.848 Run status group 0 (all jobs): 00:14:56.848 WRITE: bw=54.3MiB/s (56.9MB/s), 54.3MiB/s-54.3MiB/s (56.9MB/s-56.9MB/s), io=543MiB (569MB), run=10001-10001msec 00:14:56.848 00:14:56.848 Disk stats (read/write): 00:14:56.848 ublkb0: ios=0/137498, merge=0/0, ticks=0/8170, in_queue=8171, util=99.08% 00:14:56.848 17:56:15 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:56.848 17:56:15 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.848 17:56:15 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:56.848 [2024-10-25 17:56:15.208442] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:56.848 [2024-10-25 17:56:15.253611] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:56.848 [2024-10-25 17:56:15.254312] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:56.848 [2024-10-25 17:56:15.265610] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:56.848 [2024-10-25 17:56:15.265856] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:56.848 [2024-10-25 17:56:15.265868] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:56.848 17:56:15 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.848 17:56:15 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
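Note: with the fio pass complete and disk 0 stopped, the test turns negative: the NOT wrapper invoked above asserts that the next command fails. Stopping a ublk disk that no longer exists must return -19 (No such device), which the JSON-RPC request/response pair below shows. A one-line sketch of the expected failure, assuming the stock scripts/rpc.py client:

  # disk 0 was already stopped, so this must fail with -19 (ENODEV)
  $ scripts/rpc.py ublk_stop_disk 0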
00:14:56.848 17:56:15 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:14:56.848 17:56:15 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:56.849 17:56:15 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:56.849 17:56:15 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:56.849 17:56:15 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:56.849 17:56:15 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:56.849 17:56:15 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:14:56.849 17:56:15 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.849 17:56:15 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:56.849 [2024-10-25 17:56:15.280641] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:57.106 request: 00:14:57.106 { 00:14:57.106 "ublk_id": 0, 00:14:57.106 "method": "ublk_stop_disk", 00:14:57.106 "req_id": 1 00:14:57.106 } 00:14:57.106 Got JSON-RPC error response 00:14:57.106 response: 00:14:57.106 { 00:14:57.106 "code": -19, 00:14:57.106 "message": "No such device" 00:14:57.106 } 00:14:57.106 17:56:15 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:57.106 17:56:15 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:14:57.106 17:56:15 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:57.106 17:56:15 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:57.106 17:56:15 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:57.106 17:56:15 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:57.106 17:56:15 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.106 17:56:15 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:57.106 [2024-10-25 17:56:15.296648] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:57.106 [2024-10-25 17:56:15.305110] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:57.106 [2024-10-25 17:56:15.305145] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:57.106 17:56:15 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.106 17:56:15 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:57.106 17:56:15 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.106 17:56:15 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:57.364 17:56:15 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.364 17:56:15 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:57.364 17:56:15 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:57.364 17:56:15 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.364 17:56:15 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:57.364 17:56:15 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.364 17:56:15 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:57.364 17:56:15 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:57.364 17:56:15 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:57.364 17:56:15 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:57.364 17:56:15 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.364 17:56:15 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:57.364 17:56:15 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.364 17:56:15 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:57.364 17:56:15 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:57.364 17:56:15 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:57.364 00:14:57.364 real 0m11.196s 00:14:57.364 user 0m0.539s 00:14:57.364 sys 0m1.466s 00:14:57.364 17:56:15 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:57.364 17:56:15 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:57.364 ************************************ 00:14:57.364 END TEST test_create_ublk 00:14:57.364 ************************************ 00:14:57.364 17:56:15 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:57.364 17:56:15 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:57.364 17:56:15 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:57.364 17:56:15 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:57.364 ************************************ 00:14:57.364 START TEST test_create_multi_ublk 00:14:57.364 ************************************ 00:14:57.364 17:56:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:14:57.364 17:56:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:57.364 17:56:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.364 17:56:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:57.622 [2024-10-25 17:56:15.807565] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:57.622 [2024-10-25 17:56:15.809151] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:57.623 17:56:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.623 17:56:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:57.623 17:56:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:57.623 17:56:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:57.623 17:56:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:57.623 17:56:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.623 17:56:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:57.623 17:56:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.623 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:57.623 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:57.623 17:56:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.623 17:56:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:57.623 [2024-10-25 17:56:16.035688] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
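Note: test_create_multi_ublk repeats the single-disk sequence for four malloc bdevs. For each device the target walks the same control-command handshake visible in the debug lines here and below: UBLK_CMD_ADD_DEV, then UBLK_CMD_SET_PARAMS, then UBLK_CMD_START_DEV, each submitted and then completed via the control CQE. A condensed sketch of the loop, assuming the stock scripts/rpc.py client (the flag values are the ones the test uses):

  $ scripts/rpc.py ublk_create_target
  $ for i in 0 1 2 3; do
  >   scripts/rpc.py bdev_malloc_create -b Malloc$i 128 4096   # 128 MiB, 4 KiB blocks
  >   scripts/rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512   # 4 queues, depth 512
  > done

The ublk_get_disks listing further below should then report /dev/ublkb0 through /dev/ublkb3, one entry per bdev.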
00:14:57.623 [2024-10-25 17:56:16.035983] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:57.623 [2024-10-25 17:56:16.035995] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:57.623 [2024-10-25 17:56:16.036003] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:57.623 [2024-10-25 17:56:16.047612] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:57.623 [2024-10-25 17:56:16.047631] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:57.881 [2024-10-25 17:56:16.059575] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:57.881 [2024-10-25 17:56:16.060085] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:57.881 [2024-10-25 17:56:16.068921] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:57.881 17:56:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.881 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:57.881 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:57.881 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:57.881 17:56:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.881 17:56:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:57.881 17:56:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.881 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:57.881 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:57.881 17:56:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.881 17:56:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:57.881 [2024-10-25 17:56:16.306685] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:57.881 [2024-10-25 17:56:16.306984] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:57.881 [2024-10-25 17:56:16.306997] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:57.881 [2024-10-25 17:56:16.307003] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:58.140 [2024-10-25 17:56:16.318590] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:58.140 [2024-10-25 17:56:16.318607] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:58.140 [2024-10-25 17:56:16.330589] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:58.140 [2024-10-25 17:56:16.331086] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:58.140 [2024-10-25 17:56:16.370585] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:58.140 17:56:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.140 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:58.140 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:58.140 17:56:16 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:58.140 17:56:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.140 17:56:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:58.399 17:56:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.399 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:58.399 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:58.399 17:56:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.399 17:56:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:58.399 [2024-10-25 17:56:16.610674] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:58.399 [2024-10-25 17:56:16.610978] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:58.399 [2024-10-25 17:56:16.610991] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:58.399 [2024-10-25 17:56:16.610997] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:58.399 [2024-10-25 17:56:16.622586] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:58.399 [2024-10-25 17:56:16.622607] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:58.399 [2024-10-25 17:56:16.634580] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:58.399 [2024-10-25 17:56:16.635075] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:58.399 [2024-10-25 17:56:16.638503] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:58.399 17:56:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.399 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:58.399 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:58.399 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:58.399 17:56:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.399 17:56:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:58.399 17:56:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.399 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:58.399 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:58.399 17:56:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.399 17:56:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:58.399 [2024-10-25 17:56:16.794685] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:58.399 [2024-10-25 17:56:16.794977] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:58.399 [2024-10-25 17:56:16.794990] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:58.399 [2024-10-25 17:56:16.794995] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:58.399 [2024-10-25 
17:56:16.802594] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:58.399 [2024-10-25 17:56:16.802611] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:58.399 [2024-10-25 17:56:16.810577] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:58.399 [2024-10-25 17:56:16.811064] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:58.399 [2024-10-25 17:56:16.818625] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:58.399 17:56:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.399 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:58.399 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:58.399 17:56:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.399 17:56:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:58.658 17:56:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.658 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:58.658 { 00:14:58.658 "ublk_device": "/dev/ublkb0", 00:14:58.658 "id": 0, 00:14:58.658 "queue_depth": 512, 00:14:58.658 "num_queues": 4, 00:14:58.658 "bdev_name": "Malloc0" 00:14:58.658 }, 00:14:58.658 { 00:14:58.658 "ublk_device": "/dev/ublkb1", 00:14:58.658 "id": 1, 00:14:58.658 "queue_depth": 512, 00:14:58.658 "num_queues": 4, 00:14:58.658 "bdev_name": "Malloc1" 00:14:58.658 }, 00:14:58.658 { 00:14:58.658 "ublk_device": "/dev/ublkb2", 00:14:58.658 "id": 2, 00:14:58.658 "queue_depth": 512, 00:14:58.658 "num_queues": 4, 00:14:58.658 "bdev_name": "Malloc2" 00:14:58.658 }, 00:14:58.658 { 00:14:58.658 "ublk_device": "/dev/ublkb3", 00:14:58.658 "id": 3, 00:14:58.658 "queue_depth": 512, 00:14:58.658 "num_queues": 4, 00:14:58.658 "bdev_name": "Malloc3" 00:14:58.658 } 00:14:58.658 ]' 00:14:58.658 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:58.658 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:58.658 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:58.658 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:58.658 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:58.658 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:58.658 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:58.658 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:58.658 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:58.658 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:58.658 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:58.658 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:58.658 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:58.658 17:56:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:58.658 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
00:14:58.658 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:58.658 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:58.658 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:58.658 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:58.658 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:58.917 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:58.917 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:58.917 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:58.917 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:58.917 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:58.917 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:58.917 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:58.917 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:58.917 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:58.917 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:58.917 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:58.917 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:58.917 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:59.176 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:59.176 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:59.176 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:59.176 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:59.176 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:59.176 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:59.176 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:59.176 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:59.176 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:59.176 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:59.176 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:59.176 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:59.176 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:59.176 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:59.176 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:59.176 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:59.176 17:56:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.176 17:56:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.176 [2024-10-25 17:56:17.530671] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:59.176 [2024-10-25 17:56:17.563990] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:59.176 [2024-10-25 17:56:17.565329] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:59.176 [2024-10-25 17:56:17.570593] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:59.176 [2024-10-25 17:56:17.570847] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:59.176 [2024-10-25 17:56:17.570860] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:59.176 17:56:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.176 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:59.176 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:59.176 17:56:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.176 17:56:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.176 [2024-10-25 17:56:17.585652] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:59.434 [2024-10-25 17:56:17.625587] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:59.434 [2024-10-25 17:56:17.626477] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:59.434 [2024-10-25 17:56:17.633585] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:59.434 [2024-10-25 17:56:17.633852] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:59.434 [2024-10-25 17:56:17.633866] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:59.434 17:56:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.434 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:59.434 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:59.434 17:56:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.434 17:56:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.434 [2024-10-25 17:56:17.649666] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:59.434 [2024-10-25 17:56:17.695095] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:59.434 [2024-10-25 17:56:17.696179] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:59.434 [2024-10-25 17:56:17.702613] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:59.434 [2024-10-25 17:56:17.702866] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:59.434 [2024-10-25 17:56:17.702876] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:59.434 17:56:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.434 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:59.434 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:59.434 17:56:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.434 17:56:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
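Note: teardown mirrors setup in reverse. Each disk is stopped individually (UBLK_CMD_STOP_DEV followed by UBLK_CMD_DEL_DEV in the debug lines around here), and only then is the target itself torn down; the log below shows the destroy call issued through rpc.py with an extended timeout. A condensed sketch, with the -t value taken from the log:

  $ for i in 0 1 2 3; do scripts/rpc.py ublk_stop_disk $i; done
  # the longer -t 120 RPC timeout gives the target time to shut down
  $ scripts/rpc.py -t 120 ublk_destroy_target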
00:14:59.434 [2024-10-25 17:56:17.717646] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:59.434 [2024-10-25 17:56:17.744074] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:59.434 [2024-10-25 17:56:17.745033] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:59.434 [2024-10-25 17:56:17.749591] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:59.434 [2024-10-25 17:56:17.749831] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:59.434 [2024-10-25 17:56:17.749843] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:59.434 17:56:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.434 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:59.693 [2024-10-25 17:56:17.949645] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:59.693 [2024-10-25 17:56:17.958041] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:59.693 [2024-10-25 17:56:17.958071] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:59.693 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:59.693 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:59.693 17:56:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:59.693 17:56:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.693 17:56:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.951 17:56:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.951 17:56:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:59.951 17:56:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:59.951 17:56:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.951 17:56:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:00.517 17:56:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.517 17:56:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:00.517 17:56:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:00.517 17:56:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.517 17:56:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:00.517 17:56:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.517 17:56:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:00.517 17:56:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:15:00.517 17:56:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.517 17:56:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:00.775 17:56:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.775 17:56:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:15:00.775 17:56:19 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:15:00.775 17:56:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.775 17:56:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:00.775 17:56:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.775 17:56:19 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:00.775 17:56:19 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:15:00.775 17:56:19 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:00.775 17:56:19 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:00.775 17:56:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.775 17:56:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:00.775 17:56:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.775 17:56:19 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:00.775 17:56:19 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:15:00.775 17:56:19 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:00.775 00:15:00.775 real 0m3.360s 00:15:00.775 user 0m0.875s 00:15:00.775 sys 0m0.133s 00:15:00.775 17:56:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:00.775 17:56:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:00.775 ************************************ 00:15:00.775 END TEST test_create_multi_ublk 00:15:00.775 ************************************ 00:15:00.775 17:56:19 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:15:00.775 17:56:19 ublk -- ublk/ublk.sh@147 -- # cleanup 00:15:00.775 17:56:19 ublk -- ublk/ublk.sh@130 -- # killprocess 70868 00:15:00.775 17:56:19 ublk -- common/autotest_common.sh@950 -- # '[' -z 70868 ']' 00:15:00.775 17:56:19 ublk -- common/autotest_common.sh@954 -- # kill -0 70868 00:15:00.775 17:56:19 ublk -- common/autotest_common.sh@955 -- # uname 00:15:00.775 17:56:19 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:00.775 17:56:19 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70868 00:15:00.775 17:56:19 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:00.775 killing process with pid 70868 00:15:00.775 17:56:19 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:00.775 17:56:19 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70868' 00:15:00.775 17:56:19 ublk -- common/autotest_common.sh@969 -- # kill 70868 00:15:00.775 17:56:19 ublk -- common/autotest_common.sh@974 -- # wait 70868 00:15:01.341 [2024-10-25 17:56:19.742193] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:01.341 [2024-10-25 17:56:19.742246] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:02.275 00:15:02.275 real 0m24.361s 00:15:02.275 user 0m35.148s 00:15:02.275 sys 0m9.290s 00:15:02.275 17:56:20 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:02.275 ************************************ 00:15:02.275 END TEST ublk 00:15:02.275 ************************************ 00:15:02.275 17:56:20 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:02.275 17:56:20 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:02.275 17:56:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:15:02.275 17:56:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:02.275 17:56:20 -- common/autotest_common.sh@10 -- # set +x 00:15:02.275 ************************************ 00:15:02.275 START TEST ublk_recovery 00:15:02.275 ************************************ 00:15:02.275 17:56:20 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:02.275 * Looking for test storage... 00:15:02.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:02.275 17:56:20 ublk_recovery -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:15:02.275 17:56:20 ublk_recovery -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:15:02.275 17:56:20 ublk_recovery -- common/autotest_common.sh@1689 -- # lcov --version 00:15:02.275 17:56:20 ublk_recovery -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:15:02.275 17:56:20 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:02.275 17:56:20 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:02.275 17:56:20 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:02.275 17:56:20 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:02.275 17:56:20 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:02.275 17:56:20 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:02.275 17:56:20 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:02.275 17:56:20 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:02.275 17:56:20 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:02.275 17:56:20 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:02.275 17:56:20 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:02.275 17:56:20 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:15:02.275 17:56:20 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:15:02.275 17:56:20 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:02.275 17:56:20 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:02.275 17:56:20 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:15:02.275 17:56:20 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:15:02.275 17:56:20 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:02.275 17:56:20 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:15:02.275 17:56:20 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:02.275 17:56:20 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:15:02.275 17:56:20 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:15:02.275 17:56:20 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:02.276 17:56:20 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:15:02.276 17:56:20 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:02.276 17:56:20 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:02.276 17:56:20 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:02.276 17:56:20 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:15:02.276 17:56:20 ublk_recovery -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:02.276 17:56:20 ublk_recovery -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:15:02.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.276 --rc genhtml_branch_coverage=1 00:15:02.276 --rc genhtml_function_coverage=1 00:15:02.276 --rc genhtml_legend=1 00:15:02.276 --rc geninfo_all_blocks=1 00:15:02.276 --rc geninfo_unexecuted_blocks=1 00:15:02.276 00:15:02.276 ' 00:15:02.276 17:56:20 ublk_recovery -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:15:02.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.276 --rc genhtml_branch_coverage=1 00:15:02.276 --rc genhtml_function_coverage=1 00:15:02.276 --rc genhtml_legend=1 00:15:02.276 --rc geninfo_all_blocks=1 00:15:02.276 --rc geninfo_unexecuted_blocks=1 00:15:02.276 00:15:02.276 ' 00:15:02.276 17:56:20 ublk_recovery -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:15:02.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.276 --rc genhtml_branch_coverage=1 00:15:02.276 --rc genhtml_function_coverage=1 00:15:02.276 --rc genhtml_legend=1 00:15:02.276 --rc geninfo_all_blocks=1 00:15:02.276 --rc geninfo_unexecuted_blocks=1 00:15:02.276 00:15:02.276 ' 00:15:02.276 17:56:20 ublk_recovery -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:15:02.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.276 --rc genhtml_branch_coverage=1 00:15:02.276 --rc genhtml_function_coverage=1 00:15:02.276 --rc genhtml_legend=1 00:15:02.276 --rc geninfo_all_blocks=1 00:15:02.276 --rc geninfo_unexecuted_blocks=1 00:15:02.276 00:15:02.276 ' 00:15:02.276 17:56:20 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:02.276 17:56:20 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:02.276 17:56:20 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:02.276 17:56:20 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:02.276 17:56:20 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:02.276 17:56:20 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:02.276 17:56:20 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:02.276 17:56:20 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:02.276 17:56:20 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:15:02.276 17:56:20 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:15:02.276 17:56:20 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=71259 00:15:02.276 17:56:20 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:02.276 17:56:20 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 71259 00:15:02.276 17:56:20 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 71259 ']' 00:15:02.276 17:56:20 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.276 17:56:20 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:02.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.276 17:56:20 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.276 17:56:20 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:02.276 17:56:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.276 17:56:20 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:02.276 [2024-10-25 17:56:20.659091] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:15:02.276 [2024-10-25 17:56:20.659222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71259 ] 00:15:02.532 [2024-10-25 17:56:20.820048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:02.532 [2024-10-25 17:56:20.923180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.532 [2024-10-25 17:56:20.923392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.097 17:56:21 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:03.097 17:56:21 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:15:03.097 17:56:21 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:15:03.097 17:56:21 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.097 17:56:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.097 [2024-10-25 17:56:21.529625] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:03.097 [2024-10-25 17:56:21.531528] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:03.355 17:56:21 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.355 17:56:21 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:03.355 17:56:21 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.355 17:56:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.355 malloc0 00:15:03.355 17:56:21 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.355 17:56:21 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:15:03.355 17:56:21 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.355 17:56:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.355 [2024-10-25 17:56:21.633741] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:15:03.355 [2024-10-25 17:56:21.633844] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:15:03.355 [2024-10-25 17:56:21.633855] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:03.355 [2024-10-25 17:56:21.633865] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:03.355 [2024-10-25 17:56:21.642682] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:03.355 [2024-10-25 17:56:21.642706] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:03.355 [2024-10-25 17:56:21.649586] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:03.355 [2024-10-25 17:56:21.649734] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:03.355 [2024-10-25 17:56:21.660600] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:03.355 1 00:15:03.355 17:56:21 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.355 17:56:21 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:15:04.292 17:56:22 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=71297 00:15:04.292 17:56:22 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:15:04.292 17:56:22 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:15:04.551 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:04.551 fio-3.35 00:15:04.551 Starting 1 process 00:15:09.811 17:56:27 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 71259 00:15:09.811 17:56:27 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:15:15.079 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 71259 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:15:15.079 17:56:32 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71408 00:15:15.079 17:56:32 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:15.079 17:56:32 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:15.079 17:56:32 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71408 00:15:15.079 17:56:32 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 71408 ']' 00:15:15.079 17:56:32 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.079 17:56:32 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:15.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.079 17:56:32 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.079 17:56:32 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:15.079 17:56:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:15.079 [2024-10-25 17:56:32.756753] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
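At this point the recovery scenario is fully staged: the first target (pid 71259) created ublk device 1 over malloc0, a 60-second fio job was launched against /dev/ublkb1, the target was then killed with SIGKILL mid-workload, and a second target (pid 71408) has just been started to take the device over. A minimal sketch of the same sequence driven directly through rpc.py, run from the SPDK repo root, assuming the default /var/tmp/spdk.sock RPC socket and eliding the waitforlisten polling the harness performs between steps:

  # first target: build the ublk device and start background I/O
  build/bin/spdk_tgt -m 0x3 -L ublk & spdk_pid=$!
  scripts/rpc.py ublk_create_target
  scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
  scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128
  fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
      --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &

  # crash the target, then recover the ublk device from a fresh one
  kill -9 "$spdk_pid"
  build/bin/spdk_tgt -m 0x3 -L ublk &
  scripts/rpc.py ublk_create_target
  scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
  scripts/rpc.py ublk_recover_disk malloc0 1

The kernel keeps /dev/ublkb1 alive across the daemon crash, which is why fio can ride out the outage and still complete its full 60-second run below; ublk_recover_disk re-binds the existing kernel device to the new target's bdev (the UBLK_CMD_START_USER_RECOVERY / END_USER_RECOVERY exchange in the debug trace) rather than creating device 1 from scratch.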
00:15:15.079 [2024-10-25 17:56:32.757065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71408 ] 00:15:15.079 [2024-10-25 17:56:32.917265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:15.079 [2024-10-25 17:56:33.022439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.079 [2024-10-25 17:56:33.022458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.341 17:56:33 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:15.341 17:56:33 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:15:15.341 17:56:33 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:15:15.341 17:56:33 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.341 17:56:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:15.341 [2024-10-25 17:56:33.675580] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:15.341 [2024-10-25 17:56:33.677437] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:15.341 17:56:33 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.341 17:56:33 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:15.341 17:56:33 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.341 17:56:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:15.602 malloc0 00:15:15.602 17:56:33 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.602 17:56:33 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:15:15.603 17:56:33 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.603 17:56:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:15.603 [2024-10-25 17:56:33.780714] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:15:15.603 [2024-10-25 17:56:33.780754] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:15.603 [2024-10-25 17:56:33.780764] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:15.603 [2024-10-25 17:56:33.790605] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:15.603 [2024-10-25 17:56:33.790628] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:15:15.603 1 00:15:15.603 17:56:33 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.603 17:56:33 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 71297 00:15:16.575 [2024-10-25 17:56:34.790661] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:16.575 [2024-10-25 17:56:34.798572] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:16.575 [2024-10-25 17:56:34.798590] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:15:17.508 [2024-10-25 17:56:35.798629] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:17.508 [2024-10-25 17:56:35.804594] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:17.508 [2024-10-25 17:56:35.804627] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:15:18.441 [2024-10-25 17:56:36.804654] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:18.441 [2024-10-25 17:56:36.805606] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:18.441 [2024-10-25 17:56:36.805666] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:15:18.441 [2024-10-25 17:56:36.805691] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:15:18.442 [2024-10-25 17:56:36.805817] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:15:40.447 [2024-10-25 17:56:58.013589] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:15:40.448 [2024-10-25 17:56:58.016674] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:15:40.448 [2024-10-25 17:56:58.022802] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:15:40.448 [2024-10-25 17:56:58.022888] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:16:06.973 00:16:06.973 fio_test: (groupid=0, jobs=1): err= 0: pid=71300: Fri Oct 25 17:57:22 2024 00:16:06.973 read: IOPS=15.1k, BW=59.0MiB/s (61.9MB/s)(3542MiB/60002msec) 00:16:06.973 slat (nsec): min=974, max=394134, avg=4928.89, stdev=1674.90 00:16:06.973 clat (usec): min=923, max=30358k, avg=4327.10, stdev=262854.56 00:16:06.973 lat (usec): min=934, max=30358k, avg=4332.02, stdev=262854.56 00:16:06.973 clat percentiles (usec): 00:16:06.973 | 1.00th=[ 1680], 5.00th=[ 1795], 10.00th=[ 1827], 20.00th=[ 1860], 00:16:06.973 | 30.00th=[ 1876], 40.00th=[ 1909], 50.00th=[ 1926], 60.00th=[ 1942], 00:16:06.973 | 70.00th=[ 1958], 80.00th=[ 1991], 90.00th=[ 2073], 95.00th=[ 3064], 00:16:06.973 | 99.00th=[ 5014], 99.50th=[ 5473], 99.90th=[ 7111], 99.95th=[ 8029], 00:16:06.973 | 99.99th=[12649] 00:16:06.973 bw ( KiB/s): min=46120, max=129664, per=100.00%, avg=121047.44, stdev=14889.30, samples=59 00:16:06.973 iops : min=11530, max=32416, avg=30261.85, stdev=3722.32, samples=59 00:16:06.973 write: IOPS=15.1k, BW=59.0MiB/s (61.8MB/s)(3537MiB/60002msec); 0 zone resets 00:16:06.973 slat (nsec): min=1003, max=412046, avg=4954.71, stdev=1668.32 00:16:06.973 clat (usec): min=747, max=30358k, avg=4137.16, stdev=247081.06 00:16:06.973 lat (usec): min=758, max=30358k, avg=4142.12, stdev=247081.06 00:16:06.973 clat percentiles (usec): 00:16:06.973 | 1.00th=[ 1713], 5.00th=[ 1876], 10.00th=[ 1909], 20.00th=[ 1942], 00:16:06.973 | 30.00th=[ 1975], 40.00th=[ 1991], 50.00th=[ 2008], 60.00th=[ 2024], 00:16:06.973 | 70.00th=[ 2057], 80.00th=[ 2073], 90.00th=[ 2147], 95.00th=[ 2966], 00:16:06.973 | 99.00th=[ 5014], 99.50th=[ 5604], 99.90th=[ 7242], 99.95th=[ 8160], 00:16:06.973 | 99.99th=[12780] 00:16:06.973 bw ( KiB/s): min=45744, max=129344, per=100.00%, avg=120877.51, stdev=14905.92, samples=59 00:16:06.973 iops : min=11436, max=32336, avg=30219.37, stdev=3726.48, samples=59 00:16:06.973 lat (usec) : 750=0.01%, 1000=0.01% 00:16:06.973 lat (msec) : 2=63.81%, 4=33.56%, 10=2.60%, 20=0.02%, >=2000=0.01% 00:16:06.973 cpu : usr=3.52%, sys=15.30%, ctx=62314, majf=0, minf=13 00:16:06.973 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:16:06.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:16:06.973 issued rwts: total=906801,905551,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.973 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:06.973 00:16:06.973 Run status group 0 (all jobs): 00:16:06.973 READ: bw=59.0MiB/s (61.9MB/s), 59.0MiB/s-59.0MiB/s (61.9MB/s-61.9MB/s), io=3542MiB (3714MB), run=60002-60002msec 00:16:06.973 WRITE: bw=59.0MiB/s (61.8MB/s), 59.0MiB/s-59.0MiB/s (61.8MB/s-61.8MB/s), io=3537MiB (3709MB), run=60002-60002msec 00:16:06.973 00:16:06.973 Disk stats (read/write): 00:16:06.973 ublkb1: ios=903512/902282, merge=0/0, ticks=3868482/3619156, in_queue=7487638, util=99.92% 00:16:06.973 17:57:22 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:16:06.973 17:57:22 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.973 17:57:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.973 [2024-10-25 17:57:22.927998] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:06.973 [2024-10-25 17:57:22.967587] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:06.973 [2024-10-25 17:57:22.967731] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:06.973 [2024-10-25 17:57:22.975581] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:06.973 [2024-10-25 17:57:22.975718] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:06.973 [2024-10-25 17:57:22.975774] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:06.973 17:57:22 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.973 17:57:22 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:16:06.973 17:57:22 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.973 17:57:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.973 [2024-10-25 17:57:22.991649] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:06.973 [2024-10-25 17:57:22.995337] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:06.973 [2024-10-25 17:57:22.995368] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:06.973 17:57:22 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.973 17:57:22 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:16:06.973 17:57:22 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:16:06.973 17:57:23 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 71408 00:16:06.973 17:57:23 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 71408 ']' 00:16:06.973 17:57:23 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 71408 00:16:06.973 17:57:23 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:16:06.973 17:57:23 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:06.973 17:57:23 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71408 00:16:06.973 killing process with pid 71408 00:16:06.973 17:57:23 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:06.973 17:57:23 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:06.973 17:57:23 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71408' 00:16:06.973 17:57:23 ublk_recovery -- common/autotest_common.sh@969 -- # kill 71408 00:16:06.973 17:57:23 ublk_recovery -- common/autotest_common.sh@974 -- # 
wait 71408 00:16:06.973 [2024-10-25 17:57:24.069666] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:06.973 [2024-10-25 17:57:24.069713] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:06.973 00:16:06.973 real 1m4.326s 00:16:06.973 user 1m46.857s 00:16:06.973 sys 0m22.514s 00:16:06.973 17:57:24 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:06.973 17:57:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.973 ************************************ 00:16:06.973 END TEST ublk_recovery 00:16:06.973 ************************************ 00:16:06.973 17:57:24 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:16:06.973 17:57:24 -- spdk/autotest.sh@256 -- # timing_exit lib 00:16:06.973 17:57:24 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:06.973 17:57:24 -- common/autotest_common.sh@10 -- # set +x 00:16:06.973 17:57:24 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:16:06.973 17:57:24 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:16:06.973 17:57:24 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:16:06.973 17:57:24 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:16:06.973 17:57:24 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:16:06.973 17:57:24 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:16:06.973 17:57:24 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:16:06.973 17:57:24 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:16:06.973 17:57:24 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:16:06.973 17:57:24 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:16:06.973 17:57:24 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:06.973 17:57:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:06.973 17:57:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:06.973 17:57:24 -- common/autotest_common.sh@10 -- # set +x 00:16:06.973 ************************************ 00:16:06.973 START TEST ftl 00:16:06.973 ************************************ 00:16:06.973 17:57:24 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:06.973 * Looking for test storage... 
00:16:06.973 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:06.973 17:57:24 ftl -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:16:06.973 17:57:24 ftl -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:16:06.973 17:57:24 ftl -- common/autotest_common.sh@1689 -- # lcov --version 00:16:06.973 17:57:24 ftl -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:16:06.973 17:57:24 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:06.973 17:57:24 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:06.974 17:57:24 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:06.974 17:57:24 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:16:06.974 17:57:24 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:16:06.974 17:57:24 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:16:06.974 17:57:24 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:16:06.974 17:57:24 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:16:06.974 17:57:24 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:16:06.974 17:57:24 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:16:06.974 17:57:24 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:06.974 17:57:24 ftl -- scripts/common.sh@344 -- # case "$op" in 00:16:06.974 17:57:24 ftl -- scripts/common.sh@345 -- # : 1 00:16:06.974 17:57:24 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:06.974 17:57:24 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:06.974 17:57:24 ftl -- scripts/common.sh@365 -- # decimal 1 00:16:06.974 17:57:24 ftl -- scripts/common.sh@353 -- # local d=1 00:16:06.974 17:57:24 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:06.974 17:57:24 ftl -- scripts/common.sh@355 -- # echo 1 00:16:06.974 17:57:24 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:16:06.974 17:57:24 ftl -- scripts/common.sh@366 -- # decimal 2 00:16:06.974 17:57:24 ftl -- scripts/common.sh@353 -- # local d=2 00:16:06.974 17:57:24 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:06.974 17:57:24 ftl -- scripts/common.sh@355 -- # echo 2 00:16:06.974 17:57:24 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:16:06.974 17:57:24 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:06.974 17:57:24 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:06.974 17:57:24 ftl -- scripts/common.sh@368 -- # return 0 00:16:06.974 17:57:24 ftl -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:06.974 17:57:24 ftl -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:16:06.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.974 --rc genhtml_branch_coverage=1 00:16:06.974 --rc genhtml_function_coverage=1 00:16:06.974 --rc genhtml_legend=1 00:16:06.974 --rc geninfo_all_blocks=1 00:16:06.974 --rc geninfo_unexecuted_blocks=1 00:16:06.974 00:16:06.974 ' 00:16:06.974 17:57:24 ftl -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:16:06.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.974 --rc genhtml_branch_coverage=1 00:16:06.974 --rc genhtml_function_coverage=1 00:16:06.974 --rc genhtml_legend=1 00:16:06.974 --rc geninfo_all_blocks=1 00:16:06.974 --rc geninfo_unexecuted_blocks=1 00:16:06.974 00:16:06.974 ' 00:16:06.974 17:57:24 ftl -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:16:06.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.974 --rc genhtml_branch_coverage=1 00:16:06.974 --rc genhtml_function_coverage=1 00:16:06.974 --rc 
genhtml_legend=1 00:16:06.974 --rc geninfo_all_blocks=1 00:16:06.974 --rc geninfo_unexecuted_blocks=1 00:16:06.974 00:16:06.974 ' 00:16:06.974 17:57:24 ftl -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:16:06.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.974 --rc genhtml_branch_coverage=1 00:16:06.974 --rc genhtml_function_coverage=1 00:16:06.974 --rc genhtml_legend=1 00:16:06.974 --rc geninfo_all_blocks=1 00:16:06.974 --rc geninfo_unexecuted_blocks=1 00:16:06.974 00:16:06.974 ' 00:16:06.974 17:57:24 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:06.974 17:57:24 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:06.974 17:57:24 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:06.974 17:57:24 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:06.974 17:57:24 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:06.974 17:57:24 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:06.974 17:57:24 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:06.974 17:57:24 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:06.974 17:57:24 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:06.974 17:57:24 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:06.974 17:57:24 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:06.974 17:57:24 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:06.974 17:57:24 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:06.974 17:57:24 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:06.974 17:57:24 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:06.974 17:57:24 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:06.974 17:57:24 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:06.974 17:57:24 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:06.974 17:57:24 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:06.974 17:57:24 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:06.974 17:57:24 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:06.974 17:57:24 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:06.974 17:57:24 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:06.974 17:57:24 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:06.974 17:57:24 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:06.974 17:57:24 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:06.974 17:57:24 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:06.974 17:57:24 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:06.974 17:57:24 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:06.974 17:57:24 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:06.974 17:57:24 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:16:06.974 17:57:24 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:16:06.974 17:57:24 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:16:06.974 17:57:24 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:16:06.974 17:57:24 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:06.974 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:06.974 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:06.974 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:06.974 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:06.974 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:06.974 17:57:25 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=72211 00:16:06.974 17:57:25 ftl -- ftl/ftl.sh@38 -- # waitforlisten 72211 00:16:06.974 17:57:25 ftl -- common/autotest_common.sh@831 -- # '[' -z 72211 ']' 00:16:06.974 17:57:25 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:16:06.974 17:57:25 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.974 17:57:25 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:06.974 17:57:25 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.974 17:57:25 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:06.974 17:57:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:07.232 [2024-10-25 17:57:25.475784] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:16:07.232 [2024-10-25 17:57:25.476061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72211 ] 00:16:07.232 [2024-10-25 17:57:25.628294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.489 [2024-10-25 17:57:25.726981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.055 17:57:26 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:08.055 17:57:26 ftl -- common/autotest_common.sh@864 -- # return 0 00:16:08.055 17:57:26 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:16:08.313 17:57:26 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:08.878 17:57:27 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:16:08.878 17:57:27 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:09.446 17:57:27 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:16:09.446 17:57:27 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:09.446 17:57:27 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:09.704 17:57:27 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:16:09.704 17:57:27 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:16:09.704 17:57:27 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:16:09.704 17:57:27 ftl -- ftl/ftl.sh@50 -- # break 00:16:09.704 17:57:27 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:16:09.704 17:57:27 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:16:09.704 17:57:27 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:09.704 17:57:27 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:09.962 17:57:28 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:16:09.962 17:57:28 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:16:09.962 17:57:28 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:16:09.962 17:57:28 ftl -- ftl/ftl.sh@63 -- # break 00:16:09.962 17:57:28 ftl -- ftl/ftl.sh@66 -- # killprocess 72211 00:16:09.962 17:57:28 ftl -- common/autotest_common.sh@950 -- # '[' -z 72211 ']' 00:16:09.962 17:57:28 ftl -- common/autotest_common.sh@954 -- # kill -0 72211 00:16:09.962 17:57:28 ftl -- common/autotest_common.sh@955 -- # uname 00:16:09.962 17:57:28 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:09.962 17:57:28 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72211 00:16:09.962 killing process with pid 72211 00:16:09.962 17:57:28 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:09.962 17:57:28 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:09.962 17:57:28 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72211' 00:16:09.962 17:57:28 ftl -- common/autotest_common.sh@969 -- # kill 72211 00:16:09.962 17:57:28 ftl -- common/autotest_common.sh@974 -- # wait 72211 00:16:11.343 17:57:29 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:16:11.343 17:57:29 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:11.343 17:57:29 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:11.343 17:57:29 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:11.343 17:57:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:11.343 ************************************ 00:16:11.343 START TEST ftl_fio_basic 00:16:11.343 ************************************ 00:16:11.343 17:57:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:11.343 * Looking for test storage... 
00:16:11.343 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1689 -- # lcov --version 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:16:11.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.603 --rc genhtml_branch_coverage=1 00:16:11.603 --rc genhtml_function_coverage=1 00:16:11.603 --rc genhtml_legend=1 00:16:11.603 --rc geninfo_all_blocks=1 00:16:11.603 --rc geninfo_unexecuted_blocks=1 00:16:11.603 00:16:11.603 ' 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:16:11.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.603 --rc 
genhtml_branch_coverage=1 00:16:11.603 --rc genhtml_function_coverage=1 00:16:11.603 --rc genhtml_legend=1 00:16:11.603 --rc geninfo_all_blocks=1 00:16:11.603 --rc geninfo_unexecuted_blocks=1 00:16:11.603 00:16:11.603 ' 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:16:11.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.603 --rc genhtml_branch_coverage=1 00:16:11.603 --rc genhtml_function_coverage=1 00:16:11.603 --rc genhtml_legend=1 00:16:11.603 --rc geninfo_all_blocks=1 00:16:11.603 --rc geninfo_unexecuted_blocks=1 00:16:11.603 00:16:11.603 ' 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:16:11.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.603 --rc genhtml_branch_coverage=1 00:16:11.603 --rc genhtml_function_coverage=1 00:16:11.603 --rc genhtml_legend=1 00:16:11.603 --rc geninfo_all_blocks=1 00:16:11.603 --rc geninfo_unexecuted_blocks=1 00:16:11.603 00:16:11.603 ' 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:11.603 17:57:29 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:11.604 17:57:29 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:11.604 17:57:29 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:11.604 
17:57:29 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:11.604 17:57:29 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:11.604 17:57:29 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:11.604 17:57:29 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:11.604 17:57:29 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:11.604 17:57:29 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:11.604 17:57:29 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:16:11.604 17:57:29 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:11.604 17:57:29 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:16:11.604 17:57:29 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:16:11.604 17:57:29 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:11.604 17:57:29 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:16:11.604 17:57:29 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:16:11.604 17:57:29 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:11.604 17:57:29 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:16:11.604 17:57:29 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:16:11.604 17:57:29 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:16:11.604 17:57:29 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:16:11.604 17:57:29 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:16:11.604 17:57:29 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:16:11.604 17:57:29 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:11.604 17:57:29 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:11.604 17:57:29 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:11.604 17:57:29 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=72355 00:16:11.604 17:57:29 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 72355 00:16:11.604 17:57:29 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 72355 ']' 00:16:11.604 17:57:29 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.604 17:57:29 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:16:11.604 17:57:29 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:11.604 17:57:29 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
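Every suite in this run follows the same bring-up pattern visible here: launch spdk_tgt on the requested core mask, record its pid in svcpid, and block in waitforlisten until the RPC socket answers before issuing any commands. A rough sketch of that readiness loop, assuming rpc.py's spdk_get_version call as the probe (the real waitforlisten in autotest_common.sh adds a retry budget and richer diagnostics):

  build/bin/spdk_tgt -m 7 & svcpid=$!
  until scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do
      kill -0 "$svcpid" 2>/dev/null || exit 1   # give up if the target already died
      sleep 0.5
  done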
00:16:11.604 17:57:29 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:11.604 17:57:29 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:11.604 [2024-10-25 17:57:29.943150] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:16:11.604 [2024-10-25 17:57:29.943435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72355 ] 00:16:11.863 [2024-10-25 17:57:30.101409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:11.863 [2024-10-25 17:57:30.206369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.863 [2024-10-25 17:57:30.206720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:11.863 [2024-10-25 17:57:30.206972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.435 17:57:30 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:12.435 17:57:30 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:16:12.435 17:57:30 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:12.435 17:57:30 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:16:12.435 17:57:30 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:12.435 17:57:30 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:16:12.435 17:57:30 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:16:12.435 17:57:30 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:12.696 17:57:31 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:12.696 17:57:31 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:16:12.696 17:57:31 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:12.696 17:57:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:12.696 17:57:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:12.696 17:57:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:12.696 17:57:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:12.696 17:57:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:12.956 17:57:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:12.956 { 00:16:12.956 "name": "nvme0n1", 00:16:12.956 "aliases": [ 00:16:12.956 "8e5e8879-01de-4b0c-9b6a-89bb8aec3cac" 00:16:12.956 ], 00:16:12.956 "product_name": "NVMe disk", 00:16:12.956 "block_size": 4096, 00:16:12.956 "num_blocks": 1310720, 00:16:12.956 "uuid": "8e5e8879-01de-4b0c-9b6a-89bb8aec3cac", 00:16:12.956 "numa_id": -1, 00:16:12.956 "assigned_rate_limits": { 00:16:12.956 "rw_ios_per_sec": 0, 00:16:12.956 "rw_mbytes_per_sec": 0, 00:16:12.956 "r_mbytes_per_sec": 0, 00:16:12.956 "w_mbytes_per_sec": 0 00:16:12.956 }, 00:16:12.956 "claimed": false, 00:16:12.956 "zoned": false, 00:16:12.956 "supported_io_types": { 00:16:12.956 "read": true, 00:16:12.956 "write": true, 00:16:12.956 "unmap": true, 00:16:12.956 "flush": true, 00:16:12.956 "reset": true, 00:16:12.956 "nvme_admin": true, 00:16:12.956 "nvme_io": true, 00:16:12.956 "nvme_io_md": 
false, 00:16:12.956 "write_zeroes": true, 00:16:12.956 "zcopy": false, 00:16:12.956 "get_zone_info": false, 00:16:12.956 "zone_management": false, 00:16:12.956 "zone_append": false, 00:16:12.956 "compare": true, 00:16:12.956 "compare_and_write": false, 00:16:12.956 "abort": true, 00:16:12.956 "seek_hole": false, 00:16:12.956 "seek_data": false, 00:16:12.956 "copy": true, 00:16:12.956 "nvme_iov_md": false 00:16:12.956 }, 00:16:12.956 "driver_specific": { 00:16:12.956 "nvme": [ 00:16:12.956 { 00:16:12.956 "pci_address": "0000:00:11.0", 00:16:12.956 "trid": { 00:16:12.956 "trtype": "PCIe", 00:16:12.956 "traddr": "0000:00:11.0" 00:16:12.956 }, 00:16:12.956 "ctrlr_data": { 00:16:12.956 "cntlid": 0, 00:16:12.956 "vendor_id": "0x1b36", 00:16:12.956 "model_number": "QEMU NVMe Ctrl", 00:16:12.956 "serial_number": "12341", 00:16:12.956 "firmware_revision": "8.0.0", 00:16:12.956 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:12.956 "oacs": { 00:16:12.956 "security": 0, 00:16:12.956 "format": 1, 00:16:12.956 "firmware": 0, 00:16:12.956 "ns_manage": 1 00:16:12.956 }, 00:16:12.956 "multi_ctrlr": false, 00:16:12.956 "ana_reporting": false 00:16:12.956 }, 00:16:12.956 "vs": { 00:16:12.956 "nvme_version": "1.4" 00:16:12.956 }, 00:16:12.956 "ns_data": { 00:16:12.956 "id": 1, 00:16:12.956 "can_share": false 00:16:12.956 } 00:16:12.956 } 00:16:12.956 ], 00:16:12.956 "mp_policy": "active_passive" 00:16:12.956 } 00:16:12.956 } 00:16:12.956 ]' 00:16:12.956 17:57:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:12.956 17:57:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:12.956 17:57:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:12.956 17:57:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:12.956 17:57:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:12.956 17:57:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:16:12.956 17:57:31 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:16:12.956 17:57:31 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:12.956 17:57:31 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:16:12.956 17:57:31 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:12.956 17:57:31 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:13.218 17:57:31 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:16:13.218 17:57:31 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:13.479 17:57:31 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=e056a435-a50c-4954-aebe-0bcc64e9d97a 00:16:13.479 17:57:31 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e056a435-a50c-4954-aebe-0bcc64e9d97a 00:16:13.741 17:57:32 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=b0370f55-bd11-4beb-b79f-8c75323a6e7b 00:16:13.741 17:57:32 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b0370f55-bd11-4beb-b79f-8c75323a6e7b 00:16:13.741 17:57:32 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:16:13.741 17:57:32 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:13.741 17:57:32 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=b0370f55-bd11-4beb-b79f-8c75323a6e7b 00:16:13.741 17:57:32 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:16:13.741 17:57:32 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size b0370f55-bd11-4beb-b79f-8c75323a6e7b 00:16:13.741 17:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=b0370f55-bd11-4beb-b79f-8c75323a6e7b 00:16:13.741 17:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:13.741 17:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:13.741 17:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:13.741 17:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b0370f55-bd11-4beb-b79f-8c75323a6e7b 00:16:14.002 17:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:14.002 { 00:16:14.002 "name": "b0370f55-bd11-4beb-b79f-8c75323a6e7b", 00:16:14.002 "aliases": [ 00:16:14.002 "lvs/nvme0n1p0" 00:16:14.002 ], 00:16:14.002 "product_name": "Logical Volume", 00:16:14.002 "block_size": 4096, 00:16:14.002 "num_blocks": 26476544, 00:16:14.002 "uuid": "b0370f55-bd11-4beb-b79f-8c75323a6e7b", 00:16:14.002 "assigned_rate_limits": { 00:16:14.002 "rw_ios_per_sec": 0, 00:16:14.002 "rw_mbytes_per_sec": 0, 00:16:14.002 "r_mbytes_per_sec": 0, 00:16:14.002 "w_mbytes_per_sec": 0 00:16:14.002 }, 00:16:14.002 "claimed": false, 00:16:14.002 "zoned": false, 00:16:14.002 "supported_io_types": { 00:16:14.002 "read": true, 00:16:14.002 "write": true, 00:16:14.002 "unmap": true, 00:16:14.002 "flush": false, 00:16:14.002 "reset": true, 00:16:14.002 "nvme_admin": false, 00:16:14.002 "nvme_io": false, 00:16:14.002 "nvme_io_md": false, 00:16:14.002 "write_zeroes": true, 00:16:14.002 "zcopy": false, 00:16:14.002 "get_zone_info": false, 00:16:14.002 "zone_management": false, 00:16:14.002 "zone_append": false, 00:16:14.002 "compare": false, 00:16:14.002 "compare_and_write": false, 00:16:14.002 "abort": false, 00:16:14.002 "seek_hole": true, 00:16:14.002 "seek_data": true, 00:16:14.002 "copy": false, 00:16:14.002 "nvme_iov_md": false 00:16:14.002 }, 00:16:14.002 "driver_specific": { 00:16:14.002 "lvol": { 00:16:14.002 "lvol_store_uuid": "e056a435-a50c-4954-aebe-0bcc64e9d97a", 00:16:14.002 "base_bdev": "nvme0n1", 00:16:14.002 "thin_provision": true, 00:16:14.002 "num_allocated_clusters": 0, 00:16:14.002 "snapshot": false, 00:16:14.002 "clone": false, 00:16:14.002 "esnap_clone": false 00:16:14.002 } 00:16:14.002 } 00:16:14.002 } 00:16:14.002 ]' 00:16:14.002 17:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:14.002 17:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:14.002 17:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:14.002 17:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:14.002 17:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:14.002 17:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:14.002 17:57:32 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:16:14.002 17:57:32 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:16:14.002 17:57:32 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:14.262 17:57:32 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:14.262 17:57:32 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:16:14.262 17:57:32 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size b0370f55-bd11-4beb-b79f-8c75323a6e7b 00:16:14.262 17:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=b0370f55-bd11-4beb-b79f-8c75323a6e7b 00:16:14.262 17:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:14.262 17:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:14.262 17:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:14.262 17:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b0370f55-bd11-4beb-b79f-8c75323a6e7b 00:16:14.524 17:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:14.524 { 00:16:14.524 "name": "b0370f55-bd11-4beb-b79f-8c75323a6e7b", 00:16:14.524 "aliases": [ 00:16:14.524 "lvs/nvme0n1p0" 00:16:14.524 ], 00:16:14.524 "product_name": "Logical Volume", 00:16:14.524 "block_size": 4096, 00:16:14.524 "num_blocks": 26476544, 00:16:14.524 "uuid": "b0370f55-bd11-4beb-b79f-8c75323a6e7b", 00:16:14.524 "assigned_rate_limits": { 00:16:14.524 "rw_ios_per_sec": 0, 00:16:14.524 "rw_mbytes_per_sec": 0, 00:16:14.524 "r_mbytes_per_sec": 0, 00:16:14.524 "w_mbytes_per_sec": 0 00:16:14.524 }, 00:16:14.524 "claimed": false, 00:16:14.524 "zoned": false, 00:16:14.524 "supported_io_types": { 00:16:14.524 "read": true, 00:16:14.524 "write": true, 00:16:14.524 "unmap": true, 00:16:14.524 "flush": false, 00:16:14.524 "reset": true, 00:16:14.524 "nvme_admin": false, 00:16:14.524 "nvme_io": false, 00:16:14.524 "nvme_io_md": false, 00:16:14.524 "write_zeroes": true, 00:16:14.524 "zcopy": false, 00:16:14.524 "get_zone_info": false, 00:16:14.524 "zone_management": false, 00:16:14.524 "zone_append": false, 00:16:14.524 "compare": false, 00:16:14.524 "compare_and_write": false, 00:16:14.524 "abort": false, 00:16:14.524 "seek_hole": true, 00:16:14.524 "seek_data": true, 00:16:14.524 "copy": false, 00:16:14.524 "nvme_iov_md": false 00:16:14.524 }, 00:16:14.524 "driver_specific": { 00:16:14.524 "lvol": { 00:16:14.524 "lvol_store_uuid": "e056a435-a50c-4954-aebe-0bcc64e9d97a", 00:16:14.524 "base_bdev": "nvme0n1", 00:16:14.524 "thin_provision": true, 00:16:14.524 "num_allocated_clusters": 0, 00:16:14.524 "snapshot": false, 00:16:14.524 "clone": false, 00:16:14.524 "esnap_clone": false 00:16:14.524 } 00:16:14.524 } 00:16:14.524 } 00:16:14.524 ]' 00:16:14.524 17:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:14.524 17:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:14.524 17:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:14.524 17:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:14.524 17:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:14.524 17:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:14.524 17:57:32 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:16:14.524 17:57:32 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:14.783 17:57:33 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:16:14.783 17:57:33 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:16:14.783 17:57:33 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:16:14.783 
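[Annotation] The `'[' -eq 1 ']'` xtrace just above is a latent bash bug at fio.sh line 52: the variable to the left of -eq expanded to an empty string, so the test builtin saw only `-eq 1` and prints the "unary operator expected" complaint that follows; the script then simply falls through and continues. The usual defensive rewrites look like this (the variable name below is an illustrative stand-in, not the actual one in fio.sh):

    # What line 52 effectively executed:  [ $flag -eq 1 ]  with $flag empty,
    # which the shell expands to:         [ -eq 1 ]        -> unary operator expected
    flag=""                                        # illustrative stand-in
    [ "${flag:-0}" -eq 1 ] && echo matched         # quote and default the expansion
    [[ ${flag:-0} -eq 1 ]] && echo matched         # or use [[ ]], which never word-splits
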
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:16:14.783 17:57:33 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size b0370f55-bd11-4beb-b79f-8c75323a6e7b 00:16:14.783 17:57:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=b0370f55-bd11-4beb-b79f-8c75323a6e7b 00:16:14.783 17:57:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:14.783 17:57:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:14.783 17:57:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:14.783 17:57:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b0370f55-bd11-4beb-b79f-8c75323a6e7b 00:16:15.042 17:57:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:15.042 { 00:16:15.042 "name": "b0370f55-bd11-4beb-b79f-8c75323a6e7b", 00:16:15.042 "aliases": [ 00:16:15.042 "lvs/nvme0n1p0" 00:16:15.042 ], 00:16:15.042 "product_name": "Logical Volume", 00:16:15.042 "block_size": 4096, 00:16:15.042 "num_blocks": 26476544, 00:16:15.042 "uuid": "b0370f55-bd11-4beb-b79f-8c75323a6e7b", 00:16:15.042 "assigned_rate_limits": { 00:16:15.042 "rw_ios_per_sec": 0, 00:16:15.042 "rw_mbytes_per_sec": 0, 00:16:15.042 "r_mbytes_per_sec": 0, 00:16:15.042 "w_mbytes_per_sec": 0 00:16:15.042 }, 00:16:15.042 "claimed": false, 00:16:15.042 "zoned": false, 00:16:15.042 "supported_io_types": { 00:16:15.042 "read": true, 00:16:15.042 "write": true, 00:16:15.042 "unmap": true, 00:16:15.042 "flush": false, 00:16:15.042 "reset": true, 00:16:15.042 "nvme_admin": false, 00:16:15.042 "nvme_io": false, 00:16:15.042 "nvme_io_md": false, 00:16:15.042 "write_zeroes": true, 00:16:15.042 "zcopy": false, 00:16:15.042 "get_zone_info": false, 00:16:15.042 "zone_management": false, 00:16:15.042 "zone_append": false, 00:16:15.042 "compare": false, 00:16:15.042 "compare_and_write": false, 00:16:15.042 "abort": false, 00:16:15.042 "seek_hole": true, 00:16:15.042 "seek_data": true, 00:16:15.042 "copy": false, 00:16:15.042 "nvme_iov_md": false 00:16:15.042 }, 00:16:15.042 "driver_specific": { 00:16:15.042 "lvol": { 00:16:15.042 "lvol_store_uuid": "e056a435-a50c-4954-aebe-0bcc64e9d97a", 00:16:15.042 "base_bdev": "nvme0n1", 00:16:15.042 "thin_provision": true, 00:16:15.042 "num_allocated_clusters": 0, 00:16:15.042 "snapshot": false, 00:16:15.042 "clone": false, 00:16:15.042 "esnap_clone": false 00:16:15.042 } 00:16:15.042 } 00:16:15.042 } 00:16:15.042 ]' 00:16:15.042 17:57:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:15.042 17:57:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:15.042 17:57:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:15.042 17:57:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:15.042 17:57:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:15.042 17:57:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:15.042 17:57:33 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:16:15.042 17:57:33 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:16:15.042 17:57:33 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b0370f55-bd11-4beb-b79f-8c75323a6e7b -c nvc0n1p0 --l2p_dram_limit 60 00:16:15.302 [2024-10-25 17:57:33.535129] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.302 [2024-10-25 17:57:33.535314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:15.302 [2024-10-25 17:57:33.535338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:15.302 [2024-10-25 17:57:33.535347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.302 [2024-10-25 17:57:33.535412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.302 [2024-10-25 17:57:33.535423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:15.302 [2024-10-25 17:57:33.535434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:16:15.302 [2024-10-25 17:57:33.535444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.302 [2024-10-25 17:57:33.535482] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:15.302 [2024-10-25 17:57:33.536194] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:15.302 [2024-10-25 17:57:33.536218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.302 [2024-10-25 17:57:33.536225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:15.302 [2024-10-25 17:57:33.536235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.750 ms 00:16:15.302 [2024-10-25 17:57:33.536243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.302 [2024-10-25 17:57:33.536288] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 445fb24a-3ec0-4107-ba7a-361e3e7b8b6d 00:16:15.302 [2024-10-25 17:57:33.537475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.302 [2024-10-25 17:57:33.537610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:15.302 [2024-10-25 17:57:33.537623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:15.302 [2024-10-25 17:57:33.537632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.302 [2024-10-25 17:57:33.542861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.302 [2024-10-25 17:57:33.542893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:15.302 [2024-10-25 17:57:33.542903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.141 ms 00:16:15.302 [2024-10-25 17:57:33.542912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.302 [2024-10-25 17:57:33.543006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.302 [2024-10-25 17:57:33.543019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:15.302 [2024-10-25 17:57:33.543028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:16:15.302 [2024-10-25 17:57:33.543040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.302 [2024-10-25 17:57:33.543091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.302 [2024-10-25 17:57:33.543102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:15.302 [2024-10-25 17:57:33.543110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:15.302 [2024-10-25 17:57:33.543119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:16:15.302 [2024-10-25 17:57:33.543150] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:15.302 [2024-10-25 17:57:33.546749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.302 [2024-10-25 17:57:33.546779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:15.302 [2024-10-25 17:57:33.546793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.602 ms 00:16:15.302 [2024-10-25 17:57:33.546803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.302 [2024-10-25 17:57:33.546844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.302 [2024-10-25 17:57:33.546857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:15.302 [2024-10-25 17:57:33.546867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:15.302 [2024-10-25 17:57:33.546876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.302 [2024-10-25 17:57:33.546908] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:15.302 [2024-10-25 17:57:33.547056] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:15.302 [2024-10-25 17:57:33.547076] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:15.302 [2024-10-25 17:57:33.547088] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:15.302 [2024-10-25 17:57:33.547102] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:15.302 [2024-10-25 17:57:33.547112] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:15.302 [2024-10-25 17:57:33.547122] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:15.302 [2024-10-25 17:57:33.547132] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:15.302 [2024-10-25 17:57:33.547142] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:15.302 [2024-10-25 17:57:33.547150] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:15.302 [2024-10-25 17:57:33.547162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.302 [2024-10-25 17:57:33.547171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:15.302 [2024-10-25 17:57:33.547184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:16:15.302 [2024-10-25 17:57:33.547193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.302 [2024-10-25 17:57:33.547286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.302 [2024-10-25 17:57:33.547296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:15.302 [2024-10-25 17:57:33.547305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:15.302 [2024-10-25 17:57:33.547313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.302 [2024-10-25 17:57:33.547442] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:15.302 [2024-10-25 17:57:33.547456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:15.302 
[2024-10-25 17:57:33.547466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:15.302 [2024-10-25 17:57:33.547476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:15.302 [2024-10-25 17:57:33.547488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:15.303 [2024-10-25 17:57:33.547496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:15.303 [2024-10-25 17:57:33.547506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:15.303 [2024-10-25 17:57:33.547514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:15.303 [2024-10-25 17:57:33.547525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:15.303 [2024-10-25 17:57:33.547533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:15.303 [2024-10-25 17:57:33.547542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:15.303 [2024-10-25 17:57:33.547551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:15.303 [2024-10-25 17:57:33.547582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:15.303 [2024-10-25 17:57:33.547590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:15.303 [2024-10-25 17:57:33.547598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:15.303 [2024-10-25 17:57:33.547605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:15.303 [2024-10-25 17:57:33.547616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:15.303 [2024-10-25 17:57:33.547624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:15.303 [2024-10-25 17:57:33.547632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:15.303 [2024-10-25 17:57:33.547639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:15.303 [2024-10-25 17:57:33.547648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:15.303 [2024-10-25 17:57:33.547655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:15.303 [2024-10-25 17:57:33.547663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:15.303 [2024-10-25 17:57:33.547670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:15.303 [2024-10-25 17:57:33.547677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:15.303 [2024-10-25 17:57:33.547685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:15.303 [2024-10-25 17:57:33.547693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:15.303 [2024-10-25 17:57:33.547699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:15.303 [2024-10-25 17:57:33.547707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:15.303 [2024-10-25 17:57:33.547714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:15.303 [2024-10-25 17:57:33.547723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:15.303 [2024-10-25 17:57:33.547729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:15.303 [2024-10-25 17:57:33.547739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:15.303 [2024-10-25 17:57:33.547746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:16:15.303 [2024-10-25 17:57:33.547755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:15.303 [2024-10-25 17:57:33.547773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:15.303 [2024-10-25 17:57:33.547783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:15.303 [2024-10-25 17:57:33.547789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:15.303 [2024-10-25 17:57:33.547801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:16:15.303 [2024-10-25 17:57:33.547808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:15.303 [2024-10-25 17:57:33.547816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:15.303 [2024-10-25 17:57:33.547822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:15.303 [2024-10-25 17:57:33.547831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:15.303 [2024-10-25 17:57:33.547838] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:15.303 [2024-10-25 17:57:33.547848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:15.303 [2024-10-25 17:57:33.547854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:15.303 [2024-10-25 17:57:33.547863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:15.303 [2024-10-25 17:57:33.547871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:15.303 [2024-10-25 17:57:33.547882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:15.303 [2024-10-25 17:57:33.547888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:15.303 [2024-10-25 17:57:33.547897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:15.303 [2024-10-25 17:57:33.547914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:15.303 [2024-10-25 17:57:33.547922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:15.303 [2024-10-25 17:57:33.547932] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:15.303 [2024-10-25 17:57:33.547944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:15.303 [2024-10-25 17:57:33.547953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:15.303 [2024-10-25 17:57:33.547962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:15.303 [2024-10-25 17:57:33.547969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:15.303 [2024-10-25 17:57:33.547978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:15.303 [2024-10-25 17:57:33.547985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:15.303 [2024-10-25 17:57:33.547993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:15.303 [2024-10-25 
17:57:33.548000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:15.303 [2024-10-25 17:57:33.548009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:15.303 [2024-10-25 17:57:33.548016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:15.303 [2024-10-25 17:57:33.548027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:15.303 [2024-10-25 17:57:33.548034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:15.303 [2024-10-25 17:57:33.548044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:15.303 [2024-10-25 17:57:33.548051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:15.303 [2024-10-25 17:57:33.548060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:15.303 [2024-10-25 17:57:33.548068] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:15.303 [2024-10-25 17:57:33.548079] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:15.303 [2024-10-25 17:57:33.548087] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:15.303 [2024-10-25 17:57:33.548096] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:15.303 [2024-10-25 17:57:33.548103] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:15.303 [2024-10-25 17:57:33.548111] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:15.303 [2024-10-25 17:57:33.548119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.303 [2024-10-25 17:57:33.548128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:15.303 [2024-10-25 17:57:33.548137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.748 ms 00:16:15.303 [2024-10-25 17:57:33.548146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.303 [2024-10-25 17:57:33.548200] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
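[Annotation] Two numbers in the startup trace above are worth tying together. First, the bdev_ftl_create call earlier was issued through rpc.py with -t 240, presumably because a first-time startup has to scrub the NV cache data region (the "this may take a while" notice ending this trace), which dominates the whole sequence. Second, the layout summary "L2P entries: 20971520" with "L2P address size: 4" implies an 80 MiB mapping table, matching "Region l2p ... blocks: 80.00 MiB" in the dump, of which --l2p_dram_limit 60 lets at most 60 MiB stay resident. The arithmetic, spelled out:

    # L2P sizing implied by the layout dump above.
    entries=20971520                               # L2P entries
    addr=4                                         # bytes per L2P address
    echo $((entries * addr / 1024 / 1024))         # -> 80 MiB, i.e. "blocks: 80.00 MiB"
    # --l2p_dram_limit 60 then caps the resident slice, hence the later
    # "l2p maximum resident size is: 59 (of 60) MiB" notice once FTL is up.
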
00:16:15.303 [2024-10-25 17:57:33.548213] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:19.495 [2024-10-25 17:57:37.544334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.495 [2024-10-25 17:57:37.544388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:19.495 [2024-10-25 17:57:37.544403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3996.117 ms 00:16:19.495 [2024-10-25 17:57:37.544416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.495 [2024-10-25 17:57:37.569646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.495 [2024-10-25 17:57:37.569692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:19.495 [2024-10-25 17:57:37.569705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.013 ms 00:16:19.495 [2024-10-25 17:57:37.569715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.495 [2024-10-25 17:57:37.569848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.495 [2024-10-25 17:57:37.569861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:19.495 [2024-10-25 17:57:37.569870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:16:19.495 [2024-10-25 17:57:37.569881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.495 [2024-10-25 17:57:37.608084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.495 [2024-10-25 17:57:37.608128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:19.495 [2024-10-25 17:57:37.608140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.158 ms 00:16:19.495 [2024-10-25 17:57:37.608154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.495 [2024-10-25 17:57:37.608198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.495 [2024-10-25 17:57:37.608209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:19.495 [2024-10-25 17:57:37.608218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:19.495 [2024-10-25 17:57:37.608227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.495 [2024-10-25 17:57:37.608629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.495 [2024-10-25 17:57:37.608649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:19.495 [2024-10-25 17:57:37.608659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:16:19.495 [2024-10-25 17:57:37.608669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.495 [2024-10-25 17:57:37.608795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.495 [2024-10-25 17:57:37.608806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:19.495 [2024-10-25 17:57:37.608814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:16:19.495 [2024-10-25 17:57:37.608825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.495 [2024-10-25 17:57:37.625243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.495 [2024-10-25 17:57:37.625381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:19.495 [2024-10-25 
17:57:37.625396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.396 ms 00:16:19.495 [2024-10-25 17:57:37.625406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.495 [2024-10-25 17:57:37.636757] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:19.495 [2024-10-25 17:57:37.650578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.495 [2024-10-25 17:57:37.650618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:19.495 [2024-10-25 17:57:37.650632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.076 ms 00:16:19.495 [2024-10-25 17:57:37.650640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.495 [2024-10-25 17:57:37.709325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.495 [2024-10-25 17:57:37.709373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:19.495 [2024-10-25 17:57:37.709388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.647 ms 00:16:19.495 [2024-10-25 17:57:37.709396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.495 [2024-10-25 17:57:37.709599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.495 [2024-10-25 17:57:37.709611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:19.495 [2024-10-25 17:57:37.709623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:16:19.495 [2024-10-25 17:57:37.709630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.495 [2024-10-25 17:57:37.732327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.495 [2024-10-25 17:57:37.732491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:19.495 [2024-10-25 17:57:37.732511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.635 ms 00:16:19.495 [2024-10-25 17:57:37.732522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.495 [2024-10-25 17:57:37.754734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.495 [2024-10-25 17:57:37.754849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:19.495 [2024-10-25 17:57:37.754868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.158 ms 00:16:19.496 [2024-10-25 17:57:37.754875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.496 [2024-10-25 17:57:37.755426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.496 [2024-10-25 17:57:37.755445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:19.496 [2024-10-25 17:57:37.755455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:16:19.496 [2024-10-25 17:57:37.755463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.496 [2024-10-25 17:57:37.824003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.496 [2024-10-25 17:57:37.824040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:19.496 [2024-10-25 17:57:37.824057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.501 ms 00:16:19.496 [2024-10-25 17:57:37.824066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.496 [2024-10-25 
17:57:37.847812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.496 [2024-10-25 17:57:37.847844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:19.496 [2024-10-25 17:57:37.847858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.663 ms 00:16:19.496 [2024-10-25 17:57:37.847866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.496 [2024-10-25 17:57:37.870193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.496 [2024-10-25 17:57:37.870321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:19.496 [2024-10-25 17:57:37.870340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.285 ms 00:16:19.496 [2024-10-25 17:57:37.870347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.496 [2024-10-25 17:57:37.892982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.496 [2024-10-25 17:57:37.893098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:19.496 [2024-10-25 17:57:37.893117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.599 ms 00:16:19.496 [2024-10-25 17:57:37.893124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.496 [2024-10-25 17:57:37.893168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.496 [2024-10-25 17:57:37.893179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:19.496 [2024-10-25 17:57:37.893192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:19.496 [2024-10-25 17:57:37.893199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.496 [2024-10-25 17:57:37.893283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.496 [2024-10-25 17:57:37.893293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:19.496 [2024-10-25 17:57:37.893303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:16:19.496 [2024-10-25 17:57:37.893310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.496 [2024-10-25 17:57:37.894212] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4358.676 ms, result 0 00:16:19.496 { 00:16:19.496 "name": "ftl0", 00:16:19.496 "uuid": "445fb24a-3ec0-4107-ba7a-361e3e7b8b6d" 00:16:19.496 } 00:16:19.496 17:57:37 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:16:19.496 17:57:37 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:16:19.496 17:57:37 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:19.496 17:57:37 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:16:19.496 17:57:37 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:19.496 17:57:37 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:19.496 17:57:37 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:19.754 17:57:38 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:20.012 [ 00:16:20.012 { 00:16:20.012 "name": "ftl0", 00:16:20.012 "aliases": [ 00:16:20.012 "445fb24a-3ec0-4107-ba7a-361e3e7b8b6d" 00:16:20.012 ], 00:16:20.012 "product_name": "FTL 
disk", 00:16:20.012 "block_size": 4096, 00:16:20.012 "num_blocks": 20971520, 00:16:20.012 "uuid": "445fb24a-3ec0-4107-ba7a-361e3e7b8b6d", 00:16:20.012 "assigned_rate_limits": { 00:16:20.012 "rw_ios_per_sec": 0, 00:16:20.012 "rw_mbytes_per_sec": 0, 00:16:20.012 "r_mbytes_per_sec": 0, 00:16:20.012 "w_mbytes_per_sec": 0 00:16:20.012 }, 00:16:20.012 "claimed": false, 00:16:20.012 "zoned": false, 00:16:20.012 "supported_io_types": { 00:16:20.012 "read": true, 00:16:20.012 "write": true, 00:16:20.012 "unmap": true, 00:16:20.012 "flush": true, 00:16:20.012 "reset": false, 00:16:20.012 "nvme_admin": false, 00:16:20.012 "nvme_io": false, 00:16:20.012 "nvme_io_md": false, 00:16:20.012 "write_zeroes": true, 00:16:20.012 "zcopy": false, 00:16:20.012 "get_zone_info": false, 00:16:20.012 "zone_management": false, 00:16:20.012 "zone_append": false, 00:16:20.012 "compare": false, 00:16:20.012 "compare_and_write": false, 00:16:20.012 "abort": false, 00:16:20.012 "seek_hole": false, 00:16:20.012 "seek_data": false, 00:16:20.012 "copy": false, 00:16:20.012 "nvme_iov_md": false 00:16:20.012 }, 00:16:20.012 "driver_specific": { 00:16:20.012 "ftl": { 00:16:20.012 "base_bdev": "b0370f55-bd11-4beb-b79f-8c75323a6e7b", 00:16:20.012 "cache": "nvc0n1p0" 00:16:20.012 } 00:16:20.012 } 00:16:20.012 } 00:16:20.012 ] 00:16:20.012 17:57:38 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:16:20.012 17:57:38 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:16:20.012 17:57:38 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:20.269 17:57:38 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:16:20.269 17:57:38 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:20.529 [2024-10-25 17:57:38.726962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.529 [2024-10-25 17:57:38.727016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:20.529 [2024-10-25 17:57:38.727030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:20.529 [2024-10-25 17:57:38.727041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.529 [2024-10-25 17:57:38.727070] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:20.529 [2024-10-25 17:57:38.729644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.529 [2024-10-25 17:57:38.729675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:20.529 [2024-10-25 17:57:38.729687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.556 ms 00:16:20.529 [2024-10-25 17:57:38.729695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.529 [2024-10-25 17:57:38.730103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.529 [2024-10-25 17:57:38.730121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:20.529 [2024-10-25 17:57:38.730131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:16:20.529 [2024-10-25 17:57:38.730139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.529 [2024-10-25 17:57:38.733378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.529 [2024-10-25 17:57:38.733396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:20.529 
[2024-10-25 17:57:38.733411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.218 ms 00:16:20.529 [2024-10-25 17:57:38.733419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.529 [2024-10-25 17:57:38.739586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.529 [2024-10-25 17:57:38.739621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:20.529 [2024-10-25 17:57:38.739633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.141 ms 00:16:20.529 [2024-10-25 17:57:38.739641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.529 [2024-10-25 17:57:38.762426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.529 [2024-10-25 17:57:38.762459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:20.529 [2024-10-25 17:57:38.762472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.703 ms 00:16:20.529 [2024-10-25 17:57:38.762480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.529 [2024-10-25 17:57:38.777025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.529 [2024-10-25 17:57:38.777057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:20.529 [2024-10-25 17:57:38.777070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.487 ms 00:16:20.529 [2024-10-25 17:57:38.777079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.529 [2024-10-25 17:57:38.777261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.529 [2024-10-25 17:57:38.777276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:20.529 [2024-10-25 17:57:38.777286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:16:20.529 [2024-10-25 17:57:38.777293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.529 [2024-10-25 17:57:38.799629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.529 [2024-10-25 17:57:38.799658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:20.529 [2024-10-25 17:57:38.799670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.310 ms 00:16:20.529 [2024-10-25 17:57:38.799678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.529 [2024-10-25 17:57:38.862841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.529 [2024-10-25 17:57:38.862870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:20.529 [2024-10-25 17:57:38.862882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.121 ms 00:16:20.529 [2024-10-25 17:57:38.862890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.529 [2024-10-25 17:57:38.884975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.529 [2024-10-25 17:57:38.885004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:20.529 [2024-10-25 17:57:38.885016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.045 ms 00:16:20.529 [2024-10-25 17:57:38.885024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.529 [2024-10-25 17:57:38.907021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.529 [2024-10-25 17:57:38.907052] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:20.529 [2024-10-25 17:57:38.907064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.908 ms 00:16:20.529 [2024-10-25 17:57:38.907072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.529 [2024-10-25 17:57:38.907112] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:20.529 [2024-10-25 17:57:38.907126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 
[2024-10-25 17:57:38.907314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:20.529 [2024-10-25 17:57:38.907429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:16:20.530 [2024-10-25 17:57:38.907527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.907993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:20.530 [2024-10-25 17:57:38.908008] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:20.530 [2024-10-25 17:57:38.908023] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 445fb24a-3ec0-4107-ba7a-361e3e7b8b6d 00:16:20.530 [2024-10-25 17:57:38.908030] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:20.530 [2024-10-25 17:57:38.908041] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:20.530 [2024-10-25 17:57:38.908047] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:20.530 [2024-10-25 17:57:38.908056] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:20.530 [2024-10-25 17:57:38.908063] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:20.530 [2024-10-25 17:57:38.908074] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:20.530 [2024-10-25 17:57:38.908081] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:20.530 [2024-10-25 17:57:38.908089] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:20.530 [2024-10-25 17:57:38.908095] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:20.530 [2024-10-25 17:57:38.908104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.530 [2024-10-25 17:57:38.908111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:20.530 [2024-10-25 17:57:38.908120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:16:20.530 [2024-10-25 17:57:38.908127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.530 [2024-10-25 17:57:38.920443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.530 [2024-10-25 17:57:38.920472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:20.530 [2024-10-25 17:57:38.920484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.273 ms 00:16:20.530 [2024-10-25 17:57:38.920494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.530 [2024-10-25 17:57:38.920852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.530 [2024-10-25 17:57:38.920872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:20.530 [2024-10-25 17:57:38.920882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:16:20.530 [2024-10-25 17:57:38.920889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.789 [2024-10-25 17:57:38.964033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:20.789 [2024-10-25 17:57:38.964067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:20.789 [2024-10-25 17:57:38.964082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:20.789 [2024-10-25 17:57:38.964090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
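[Annotation] The shutdown trace above persists the L2P, NV cache metadata, valid map, band and trim metadata, and finally writes the clean state into the superblock, so the next load can take the clean (non-recovery) startup path. The Rollback entries that follow are not an error path: they read as the FTL management state machine unwinding each startup step in reverse (IO channels, memory pools, metadata, cache and base bdevs), and every one reports status: 0. The whole teardown was driven by the single RPC traced earlier at fio.sh@73; once it returns, the bdev is gone:

    # The unload seen at fio.sh@73 above; afterwards ftl0 no longer shows up
    # in bdev_get_bdevs (rpc.py exits nonzero for a missing bdev).
    rpc.py bdev_ftl_unload -b ftl0
    rpc.py bdev_get_bdevs -b ftl0 || echo "ftl0 unloaded"
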
00:16:20.789 [2024-10-25 17:57:38.964153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:20.789 [2024-10-25 17:57:38.964161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:20.789 [2024-10-25 17:57:38.964171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:20.789 [2024-10-25 17:57:38.964179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.789 [2024-10-25 17:57:38.964257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:20.789 [2024-10-25 17:57:38.964267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:20.789 [2024-10-25 17:57:38.964278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:20.789 [2024-10-25 17:57:38.964287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.789 [2024-10-25 17:57:38.964313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:20.789 [2024-10-25 17:57:38.964320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:20.789 [2024-10-25 17:57:38.964330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:20.789 [2024-10-25 17:57:38.964337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.789 [2024-10-25 17:57:39.045268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:20.789 [2024-10-25 17:57:39.045319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:20.789 [2024-10-25 17:57:39.045334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:20.789 [2024-10-25 17:57:39.045345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.789 [2024-10-25 17:57:39.108678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:20.789 [2024-10-25 17:57:39.108724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:20.789 [2024-10-25 17:57:39.108738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:20.789 [2024-10-25 17:57:39.108746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.789 [2024-10-25 17:57:39.108838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:20.789 [2024-10-25 17:57:39.108853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:20.789 [2024-10-25 17:57:39.108864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:20.789 [2024-10-25 17:57:39.108872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.789 [2024-10-25 17:57:39.108937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:20.789 [2024-10-25 17:57:39.108947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:20.789 [2024-10-25 17:57:39.108956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:20.789 [2024-10-25 17:57:39.108964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.789 [2024-10-25 17:57:39.109073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:20.789 [2024-10-25 17:57:39.109083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:20.789 [2024-10-25 17:57:39.109093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:20.789 [2024-10-25 
17:57:39.109100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.789 [2024-10-25 17:57:39.109149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:20.789 [2024-10-25 17:57:39.109160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:20.789 [2024-10-25 17:57:39.109171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:20.789 [2024-10-25 17:57:39.109178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.789 [2024-10-25 17:57:39.109219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:20.789 [2024-10-25 17:57:39.109227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:20.789 [2024-10-25 17:57:39.109236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:20.789 [2024-10-25 17:57:39.109243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.789 [2024-10-25 17:57:39.109296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:20.789 [2024-10-25 17:57:39.109305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:20.789 [2024-10-25 17:57:39.109314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:20.789 [2024-10-25 17:57:39.109321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.789 [2024-10-25 17:57:39.109486] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 382.498 ms, result 0 00:16:20.789 true 00:16:20.789 17:57:39 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 72355 00:16:20.789 17:57:39 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 72355 ']' 00:16:20.789 17:57:39 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 72355 00:16:20.789 17:57:39 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:16:20.789 17:57:39 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:20.789 17:57:39 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72355 00:16:20.789 killing process with pid 72355 00:16:20.789 17:57:39 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:20.789 17:57:39 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:20.789 17:57:39 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72355' 00:16:20.789 17:57:39 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 72355 00:16:20.789 17:57:39 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 72355 00:16:23.322 17:57:41 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:16:23.322 17:57:41 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:23.322 17:57:41 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:16:23.322 17:57:41 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:23.322 17:57:41 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:23.322 17:57:41 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:23.322 17:57:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:23.322 17:57:41 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:23.322 17:57:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:23.322 17:57:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:23.322 17:57:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:23.322 17:57:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:23.322 17:57:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:23.322 17:57:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:23.322 17:57:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:23.322 17:57:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:23.322 17:57:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:23.322 17:57:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:23.322 17:57:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:23.322 17:57:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:23.322 17:57:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:23.322 17:57:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:23.322 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:16:23.322 fio-3.35 00:16:23.322 Starting 1 thread 00:16:27.511 00:16:27.511 test: (groupid=0, jobs=1): err= 0: pid=72550: Fri Oct 25 17:57:45 2024 00:16:27.511 read: IOPS=1329, BW=88.3MiB/s (92.6MB/s)(255MiB/2882msec) 00:16:27.512 slat (nsec): min=3765, max=20518, avg=4767.84, stdev=1989.82 00:16:27.512 clat (usec): min=231, max=757, avg=337.70, stdev=58.75 00:16:27.512 lat (usec): min=235, max=768, avg=342.47, stdev=59.43 00:16:27.512 clat percentiles (usec): 00:16:27.512 | 1.00th=[ 273], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 314], 00:16:27.512 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 322], 60.00th=[ 322], 00:16:27.512 | 70.00th=[ 326], 80.00th=[ 334], 90.00th=[ 412], 95.00th=[ 465], 00:16:27.512 | 99.00th=[ 594], 99.50th=[ 652], 99.90th=[ 685], 99.95th=[ 717], 00:16:27.512 | 99.99th=[ 758] 00:16:27.512 write: IOPS=1339, BW=88.9MiB/s (93.3MB/s)(256MiB/2879msec); 0 zone resets 00:16:27.512 slat (usec): min=17, max=106, avg=21.22, stdev= 4.19 00:16:27.512 clat (usec): min=282, max=1876, avg=371.73, stdev=91.22 00:16:27.512 lat (usec): min=302, max=1902, avg=392.95, stdev=91.44 00:16:27.512 clat percentiles (usec): 00:16:27.512 | 1.00th=[ 306], 5.00th=[ 310], 10.00th=[ 318], 20.00th=[ 334], 00:16:27.512 | 30.00th=[ 343], 40.00th=[ 343], 50.00th=[ 347], 60.00th=[ 347], 00:16:27.512 | 70.00th=[ 355], 80.00th=[ 375], 90.00th=[ 424], 95.00th=[ 553], 00:16:27.512 | 99.00th=[ 791], 99.50th=[ 873], 99.90th=[ 1123], 99.95th=[ 1188], 00:16:27.512 | 99.99th=[ 1876] 00:16:27.512 bw ( KiB/s): min=73712, max=96696, per=99.54%, avg=90657.60, stdev=9534.37, samples=5 00:16:27.512 iops : min= 1084, max= 1422, avg=1333.20, stdev=140.21, samples=5 00:16:27.512 lat (usec) : 250=0.08%, 500=94.63%, 750=4.64%, 1000=0.56% 
00:16:27.512 lat (msec) : 2=0.09% 00:16:27.512 cpu : usr=99.34%, sys=0.03%, ctx=3, majf=0, minf=1169 00:16:27.512 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:27.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.512 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:27.512 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:27.512 00:16:27.512 Run status group 0 (all jobs): 00:16:27.512 READ: bw=88.3MiB/s (92.6MB/s), 88.3MiB/s-88.3MiB/s (92.6MB/s-92.6MB/s), io=255MiB (267MB), run=2882-2882msec 00:16:27.512 WRITE: bw=88.9MiB/s (93.3MB/s), 88.9MiB/s-88.9MiB/s (93.3MB/s-93.3MB/s), io=256MiB (269MB), run=2879-2879msec 00:16:28.444 ----------------------------------------------------- 00:16:28.444 Suppressions used: 00:16:28.444 count bytes template 00:16:28.444 1 5 /usr/src/fio/parse.c 00:16:28.444 1 8 libtcmalloc_minimal.so 00:16:28.444 1 904 libcrypto.so 00:16:28.444 ----------------------------------------------------- 00:16:28.444 00:16:28.444 17:57:46 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:16:28.444 17:57:46 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:28.444 17:57:46 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:28.444 17:57:46 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:28.444 17:57:46 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:16:28.444 17:57:46 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:28.444 17:57:46 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:28.444 17:57:46 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:28.444 17:57:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:28.444 17:57:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:28.444 17:57:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:28.444 17:57:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:28.444 17:57:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:28.444 17:57:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:28.444 17:57:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:28.444 17:57:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:28.444 17:57:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:28.444 17:57:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:28.444 17:57:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:28.444 17:57:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:28.444 17:57:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:28.444 17:57:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:28.444 17:57:46 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:28.444 17:57:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:28.701 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:28.701 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:28.701 fio-3.35 00:16:28.702 Starting 2 threads 00:16:55.304 00:16:55.304 first_half: (groupid=0, jobs=1): err= 0: pid=72642: Fri Oct 25 17:58:10 2024 00:16:55.304 read: IOPS=2902, BW=11.3MiB/s (11.9MB/s)(255MiB/22461msec) 00:16:55.304 slat (nsec): min=2987, max=43824, avg=3861.16, stdev=1006.20 00:16:55.304 clat (usec): min=592, max=275822, avg=33026.88, stdev=15642.20 00:16:55.304 lat (usec): min=597, max=275826, avg=33030.75, stdev=15642.19 00:16:55.304 clat percentiles (msec): 00:16:55.304 | 1.00th=[ 4], 5.00th=[ 28], 10.00th=[ 29], 20.00th=[ 30], 00:16:55.304 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:16:55.304 | 70.00th=[ 32], 80.00th=[ 35], 90.00th=[ 37], 95.00th=[ 40], 00:16:55.304 | 99.00th=[ 118], 99.50th=[ 144], 99.90th=[ 197], 99.95th=[ 232], 00:16:55.304 | 99.99th=[ 271] 00:16:55.304 write: IOPS=4078, BW=15.9MiB/s (16.7MB/s)(256MiB/16069msec); 0 zone resets 00:16:55.304 slat (usec): min=3, max=396, avg= 6.08, stdev= 3.32 00:16:55.304 clat (usec): min=369, max=81405, avg=10996.39, stdev=19403.82 00:16:55.304 lat (usec): min=378, max=81411, avg=11002.47, stdev=19403.83 00:16:55.304 clat percentiles (usec): 00:16:55.304 | 1.00th=[ 652], 5.00th=[ 766], 10.00th=[ 906], 20.00th=[ 1090], 00:16:55.304 | 30.00th=[ 1270], 40.00th=[ 1876], 50.00th=[ 3818], 60.00th=[ 4948], 00:16:55.304 | 70.00th=[ 5866], 80.00th=[10683], 90.00th=[57410], 95.00th=[63701], 00:16:55.304 | 99.00th=[71828], 99.50th=[74974], 99.90th=[79168], 99.95th=[79168], 00:16:55.304 | 99.99th=[81265] 00:16:55.304 bw ( KiB/s): min= 984, max=52064, per=81.44%, avg=23831.27, stdev=16036.15, samples=22 00:16:55.304 iops : min= 246, max=13016, avg=5957.82, stdev=4009.04, samples=22 00:16:55.304 lat (usec) : 500=0.02%, 750=2.25%, 1000=5.06% 00:16:55.304 lat (msec) : 2=13.46%, 4=5.35%, 10=13.90%, 20=5.32%, 50=47.31% 00:16:55.304 lat (msec) : 100=6.55%, 250=0.77%, 500=0.01% 00:16:55.304 cpu : usr=99.39%, sys=0.13%, ctx=35, majf=0, minf=5571 00:16:55.304 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:55.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.304 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:55.304 issued rwts: total=65196,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.304 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:55.304 second_half: (groupid=0, jobs=1): err= 0: pid=72643: Fri Oct 25 17:58:10 2024 00:16:55.305 read: IOPS=2887, BW=11.3MiB/s (11.8MB/s)(255MiB/22587msec) 00:16:55.305 slat (nsec): min=3074, max=18358, avg=3800.78, stdev=710.98 00:16:55.305 clat (usec): min=630, max=279790, avg=32306.33, stdev=14388.22 00:16:55.305 lat (usec): min=634, max=279795, avg=32310.14, stdev=14388.26 00:16:55.305 clat percentiles (msec): 00:16:55.305 | 1.00th=[ 6], 5.00th=[ 25], 10.00th=[ 29], 20.00th=[ 30], 00:16:55.305 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:16:55.305 | 70.00th=[ 32], 80.00th=[ 34], 90.00th=[ 36], 
95.00th=[ 39], 00:16:55.305 | 99.00th=[ 110], 99.50th=[ 144], 99.90th=[ 165], 99.95th=[ 186], 00:16:55.305 | 99.99th=[ 275] 00:16:55.305 write: IOPS=3657, BW=14.3MiB/s (15.0MB/s)(256MiB/17917msec); 0 zone resets 00:16:55.305 slat (usec): min=3, max=549, avg= 5.61, stdev= 3.18 00:16:55.305 clat (usec): min=377, max=82818, avg=11951.42, stdev=19727.11 00:16:55.305 lat (usec): min=386, max=82823, avg=11957.03, stdev=19727.16 00:16:55.305 clat percentiles (usec): 00:16:55.305 | 1.00th=[ 627], 5.00th=[ 717], 10.00th=[ 799], 20.00th=[ 1029], 00:16:55.305 | 30.00th=[ 1254], 40.00th=[ 2868], 50.00th=[ 4293], 60.00th=[ 5407], 00:16:55.305 | 70.00th=[ 8586], 80.00th=[11731], 90.00th=[57410], 95.00th=[64226], 00:16:55.305 | 99.00th=[73925], 99.50th=[76022], 99.90th=[80217], 99.95th=[81265], 00:16:55.305 | 99.99th=[81265] 00:16:55.305 bw ( KiB/s): min= 928, max=40888, per=77.90%, avg=22795.13, stdev=12482.24, samples=23 00:16:55.305 iops : min= 232, max=10222, avg=5698.78, stdev=3120.56, samples=23 00:16:55.305 lat (usec) : 500=0.04%, 750=3.43%, 1000=5.99% 00:16:55.305 lat (msec) : 2=8.99%, 4=5.65%, 10=14.92%, 20=6.11%, 50=47.57% 00:16:55.305 lat (msec) : 100=6.71%, 250=0.59%, 500=0.01% 00:16:55.305 cpu : usr=99.28%, sys=0.11%, ctx=32, majf=0, minf=5530 00:16:55.305 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:55.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.305 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:55.305 issued rwts: total=65212,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.305 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:55.305 00:16:55.305 Run status group 0 (all jobs): 00:16:55.305 READ: bw=22.6MiB/s (23.6MB/s), 11.3MiB/s-11.3MiB/s (11.8MB/s-11.9MB/s), io=509MiB (534MB), run=22461-22587msec 00:16:55.305 WRITE: bw=28.6MiB/s (30.0MB/s), 14.3MiB/s-15.9MiB/s (15.0MB/s-16.7MB/s), io=512MiB (537MB), run=16069-17917msec 00:16:55.305 ----------------------------------------------------- 00:16:55.305 Suppressions used: 00:16:55.305 count bytes template 00:16:55.305 2 10 /usr/src/fio/parse.c 00:16:55.305 1 96 /usr/src/fio/iolog.c 00:16:55.305 1 8 libtcmalloc_minimal.so 00:16:55.305 1 904 libcrypto.so 00:16:55.305 ----------------------------------------------------- 00:16:55.305 00:16:55.305 17:58:12 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:55.305 17:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:55.305 17:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:55.305 17:58:12 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:55.305 17:58:12 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:55.305 17:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:55.305 17:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:55.305 17:58:12 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:55.305 17:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:55.305 17:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:55.305 17:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 
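The sanitizer preload sequence opening here is the same one already traced above for randw-verify and randw-verify-j2. Condensed into a standalone sketch (an approximation of the path this run takes; the real fio_plugin helper in autotest_common.sh also iterates over libclang_rt.asan and tolerates builds without sanitizers):

  # fio itself is not linked against ASan, but the spdk_bdev ioengine plugin
  # is, so the ASan runtime the plugin links against has to be preloaded
  # ahead of the plugin for the engine to load cleanly.
  fio_plugin() {
      local plugin=$1 jobfile=$2 asan_lib
      asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
      LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$jobfile"
  }

  fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
             /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio

On this machine the ldd pipeline resolves to /usr/lib64/libasan.so.8, which is exactly what every LD_PRELOAD line in the trace shows.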
00:16:55.305 17:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:55.305 17:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:55.305 17:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:55.305 17:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:55.305 17:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:55.305 17:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:55.305 17:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:55.305 17:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:55.305 17:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:55.305 17:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:55.305 17:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:55.305 17:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:55.305 17:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:55.305 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:55.305 fio-3.35 00:16:55.305 Starting 1 thread 00:17:07.497 00:17:07.497 test: (groupid=0, jobs=1): err= 0: pid=72940: Fri Oct 25 17:58:25 2024 00:17:07.497 read: IOPS=8145, BW=31.8MiB/s (33.4MB/s)(255MiB/8005msec) 00:17:07.497 slat (nsec): min=2986, max=19921, avg=3461.51, stdev=662.52 00:17:07.497 clat (usec): min=486, max=30702, avg=15707.83, stdev=1470.91 00:17:07.497 lat (usec): min=493, max=30705, avg=15711.29, stdev=1470.91 00:17:07.497 clat percentiles (usec): 00:17:07.497 | 1.00th=[14615], 5.00th=[14746], 10.00th=[14877], 20.00th=[15008], 00:17:07.497 | 30.00th=[15139], 40.00th=[15270], 50.00th=[15401], 60.00th=[15533], 00:17:07.497 | 70.00th=[15664], 80.00th=[15795], 90.00th=[16581], 95.00th=[19006], 00:17:07.497 | 99.00th=[22152], 99.50th=[23200], 99.90th=[24511], 99.95th=[26870], 00:17:07.497 | 99.99th=[30016] 00:17:07.497 write: IOPS=16.0k, BW=62.6MiB/s (65.6MB/s)(256MiB/4092msec); 0 zone resets 00:17:07.497 slat (usec): min=4, max=378, avg= 6.20, stdev= 2.68 00:17:07.497 clat (usec): min=492, max=47525, avg=7950.04, stdev=9820.46 00:17:07.497 lat (usec): min=496, max=47531, avg=7956.23, stdev=9820.43 00:17:07.497 clat percentiles (usec): 00:17:07.497 | 1.00th=[ 627], 5.00th=[ 725], 10.00th=[ 824], 20.00th=[ 979], 00:17:07.497 | 30.00th=[ 1123], 40.00th=[ 1549], 50.00th=[ 5276], 60.00th=[ 6063], 00:17:07.497 | 70.00th=[ 7242], 80.00th=[ 8848], 90.00th=[28443], 95.00th=[30016], 00:17:07.497 | 99.00th=[33817], 99.50th=[35914], 99.90th=[40109], 99.95th=[41681], 00:17:07.497 | 99.99th=[46400] 00:17:07.497 bw ( KiB/s): min= 9216, max=88064, per=90.91%, avg=58241.67, stdev=21578.66, samples=9 00:17:07.497 iops : min= 2304, max=22016, avg=14560.33, stdev=5394.67, samples=9 00:17:07.497 lat (usec) : 500=0.01%, 750=3.14%, 1000=7.84% 00:17:07.497 lat (msec) : 2=9.62%, 4=0.61%, 10=20.02%, 20=49.02%, 50=9.75% 00:17:07.497 cpu : usr=99.15%, sys=0.17%, ctx=19, majf=0, minf=5566 00:17:07.497 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:07.497 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.497 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:07.497 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:07.497 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:07.497 00:17:07.497 Run status group 0 (all jobs): 00:17:07.497 READ: bw=31.8MiB/s (33.4MB/s), 31.8MiB/s-31.8MiB/s (33.4MB/s-33.4MB/s), io=255MiB (267MB), run=8005-8005msec 00:17:07.497 WRITE: bw=62.6MiB/s (65.6MB/s), 62.6MiB/s-62.6MiB/s (65.6MB/s-65.6MB/s), io=256MiB (268MB), run=4092-4092msec 00:17:09.404 ----------------------------------------------------- 00:17:09.404 Suppressions used: 00:17:09.404 count bytes template 00:17:09.404 1 5 /usr/src/fio/parse.c 00:17:09.404 2 192 /usr/src/fio/iolog.c 00:17:09.404 1 8 libtcmalloc_minimal.so 00:17:09.404 1 904 libcrypto.so 00:17:09.404 ----------------------------------------------------- 00:17:09.404 00:17:09.404 17:58:27 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:17:09.404 17:58:27 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:09.404 17:58:27 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:09.404 17:58:27 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:09.404 Remove shared memory files 00:17:09.404 17:58:27 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:17:09.404 17:58:27 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:09.404 17:58:27 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:17:09.404 17:58:27 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:17:09.404 17:58:27 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57077 /dev/shm/spdk_tgt_trace.pid71259 00:17:09.404 17:58:27 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:09.404 17:58:27 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:17:09.404 ************************************ 00:17:09.404 END TEST ftl_fio_basic 00:17:09.404 ************************************ 00:17:09.404 00:17:09.404 real 0m57.881s 00:17:09.404 user 2m6.257s 00:17:09.404 sys 0m2.538s 00:17:09.404 17:58:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:09.404 17:58:27 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:09.404 17:58:27 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:09.404 17:58:27 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:09.404 17:58:27 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:09.404 17:58:27 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:09.404 ************************************ 00:17:09.404 START TEST ftl_bdevperf 00:17:09.404 ************************************ 00:17:09.404 17:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:09.404 * Looking for test storage... 
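With ftl_fio_basic finished (57.9 s wall time), the harness moves straight into ftl_bdevperf, passing two PCIe addresses on the command line. Judging by how they are consumed below, the first is the controller backing the base bdev and the second the controller used as the FTL non-volatile cache. A sketch of the assignments; that they come from the positional parameters is an assumption, since the trace only shows the resolved values:

  # bdevperf.sh, as traced below at script lines @11-@12 (sketch):
  device=$1          # 0000:00:11.0, base NVMe controller (user data)
  cache_device=$2    # 0000:00:10.0, NV-cache NVMe controller

create_base_bdev and create_nv_cache_bdev later attach controllers at exactly these two addresses.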
00:17:09.404 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:09.404 17:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:17:09.404 17:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1689 -- # lcov --version 00:17:09.404 17:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:17:09.404 17:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:17:09.404 17:58:27 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:09.404 17:58:27 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:09.404 17:58:27 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:09.404 17:58:27 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:17:09.404 17:58:27 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:17:09.404 17:58:27 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:17:09.404 17:58:27 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:17:09.404 17:58:27 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:17:09.404 17:58:27 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:17:09.404 17:58:27 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:17:09.404 17:58:27 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:09.404 17:58:27 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:17:09.404 17:58:27 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:17:09.404 17:58:27 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:09.404 17:58:27 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:09.404 17:58:27 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:17:09.404 17:58:27 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:17:09.404 17:58:27 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:09.404 17:58:27 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:17:09.404 17:58:27 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:09.404 17:58:27 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:17:09.404 17:58:27 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:17:09.404 17:58:27 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:09.404 17:58:27 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:17:09.404 17:58:27 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:17:09.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.405 --rc genhtml_branch_coverage=1 00:17:09.405 --rc genhtml_function_coverage=1 00:17:09.405 --rc genhtml_legend=1 00:17:09.405 --rc geninfo_all_blocks=1 00:17:09.405 --rc geninfo_unexecuted_blocks=1 00:17:09.405 00:17:09.405 ' 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:17:09.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.405 --rc genhtml_branch_coverage=1 00:17:09.405 
--rc genhtml_function_coverage=1 00:17:09.405 --rc genhtml_legend=1 00:17:09.405 --rc geninfo_all_blocks=1 00:17:09.405 --rc geninfo_unexecuted_blocks=1 00:17:09.405 00:17:09.405 ' 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:17:09.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.405 --rc genhtml_branch_coverage=1 00:17:09.405 --rc genhtml_function_coverage=1 00:17:09.405 --rc genhtml_legend=1 00:17:09.405 --rc geninfo_all_blocks=1 00:17:09.405 --rc geninfo_unexecuted_blocks=1 00:17:09.405 00:17:09.405 ' 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:17:09.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.405 --rc genhtml_branch_coverage=1 00:17:09.405 --rc genhtml_function_coverage=1 00:17:09.405 --rc genhtml_legend=1 00:17:09.405 --rc geninfo_all_blocks=1 00:17:09.405 --rc geninfo_unexecuted_blocks=1 00:17:09.405 00:17:09.405 ' 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=73174 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 73174 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 73174 ']' 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:09.405 17:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:09.668 [2024-10-25 17:58:27.841321] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
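bdevperf comes up with -z and -T ftl0; on the usual reading of these flags, -z keeps it idle until it is driven over RPC and -T names the bdev it will exercise. The FTL stack it exercises is then assembled one rpc.py call at a time. Condensed from the traces that follow (the commands are verbatim from the trace; the UUIDs are the ones this run produced):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0  # base NVMe -> nvme0n1
  $RPC bdev_lvol_create_lvstore nvme0n1 lvs                          # -> fded16ec-d000-466b-aaff-6ea97b13be79
  $RPC bdev_lvol_create nvme0n1p0 103424 -t \
      -u fded16ec-d000-466b-aaff-6ea97b13be79                        # thin 103424 MiB lvol
  $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # cache NVMe -> nvc0n1
  $RPC bdev_split_create nvc0n1 -s 5171 1                            # one 5171 MiB slice -> nvc0n1p0
  $RPC -t 240 bdev_ftl_create -b ftl0 \
      -d fa94095e-12da-4d21-9669-680803d24d99 -c nvc0n1p0 \
      --l2p_dram_limit 20                                            # 20 MiB DRAM budget for the L2P

The generous -t 240 timeout on the final call covers the FTL bring-up traced next: superblock creation, band initialization, and layout setup.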
00:17:09.668 [2024-10-25 17:58:27.841579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73174 ] 00:17:09.668 [2024-10-25 17:58:27.996585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.668 [2024-10-25 17:58:28.098171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.612 17:58:28 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:10.612 17:58:28 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:17:10.612 17:58:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:10.613 17:58:28 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:17:10.613 17:58:28 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:10.613 17:58:28 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:17:10.613 17:58:28 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:17:10.613 17:58:28 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:10.613 17:58:28 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:10.613 17:58:28 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:17:10.613 17:58:28 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:10.613 17:58:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:10.613 17:58:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:10.613 17:58:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:10.613 17:58:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:10.613 17:58:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:10.872 17:58:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:10.872 { 00:17:10.872 "name": "nvme0n1", 00:17:10.872 "aliases": [ 00:17:10.872 "a746f842-fe64-422e-8b41-efa4ce250b19" 00:17:10.872 ], 00:17:10.872 "product_name": "NVMe disk", 00:17:10.872 "block_size": 4096, 00:17:10.872 "num_blocks": 1310720, 00:17:10.872 "uuid": "a746f842-fe64-422e-8b41-efa4ce250b19", 00:17:10.872 "numa_id": -1, 00:17:10.872 "assigned_rate_limits": { 00:17:10.872 "rw_ios_per_sec": 0, 00:17:10.872 "rw_mbytes_per_sec": 0, 00:17:10.872 "r_mbytes_per_sec": 0, 00:17:10.872 "w_mbytes_per_sec": 0 00:17:10.872 }, 00:17:10.872 "claimed": true, 00:17:10.872 "claim_type": "read_many_write_one", 00:17:10.872 "zoned": false, 00:17:10.872 "supported_io_types": { 00:17:10.872 "read": true, 00:17:10.872 "write": true, 00:17:10.872 "unmap": true, 00:17:10.872 "flush": true, 00:17:10.872 "reset": true, 00:17:10.872 "nvme_admin": true, 00:17:10.872 "nvme_io": true, 00:17:10.872 "nvme_io_md": false, 00:17:10.872 "write_zeroes": true, 00:17:10.872 "zcopy": false, 00:17:10.872 "get_zone_info": false, 00:17:10.872 "zone_management": false, 00:17:10.872 "zone_append": false, 00:17:10.872 "compare": true, 00:17:10.872 "compare_and_write": false, 00:17:10.872 "abort": true, 00:17:10.872 "seek_hole": false, 00:17:10.872 "seek_data": false, 00:17:10.872 "copy": true, 00:17:10.872 "nvme_iov_md": false 00:17:10.872 }, 00:17:10.872 "driver_specific": { 00:17:10.872 
"nvme": [ 00:17:10.872 { 00:17:10.872 "pci_address": "0000:00:11.0", 00:17:10.872 "trid": { 00:17:10.872 "trtype": "PCIe", 00:17:10.872 "traddr": "0000:00:11.0" 00:17:10.872 }, 00:17:10.872 "ctrlr_data": { 00:17:10.872 "cntlid": 0, 00:17:10.872 "vendor_id": "0x1b36", 00:17:10.872 "model_number": "QEMU NVMe Ctrl", 00:17:10.872 "serial_number": "12341", 00:17:10.872 "firmware_revision": "8.0.0", 00:17:10.872 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:10.872 "oacs": { 00:17:10.872 "security": 0, 00:17:10.872 "format": 1, 00:17:10.872 "firmware": 0, 00:17:10.872 "ns_manage": 1 00:17:10.872 }, 00:17:10.872 "multi_ctrlr": false, 00:17:10.872 "ana_reporting": false 00:17:10.872 }, 00:17:10.872 "vs": { 00:17:10.872 "nvme_version": "1.4" 00:17:10.872 }, 00:17:10.872 "ns_data": { 00:17:10.872 "id": 1, 00:17:10.872 "can_share": false 00:17:10.872 } 00:17:10.872 } 00:17:10.872 ], 00:17:10.872 "mp_policy": "active_passive" 00:17:10.872 } 00:17:10.872 } 00:17:10.872 ]' 00:17:10.872 17:58:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:10.872 17:58:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:10.872 17:58:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:10.872 17:58:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:10.872 17:58:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:10.872 17:58:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:17:10.872 17:58:29 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:17:10.872 17:58:29 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:10.872 17:58:29 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:17:10.872 17:58:29 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:10.872 17:58:29 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:11.133 17:58:29 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=e056a435-a50c-4954-aebe-0bcc64e9d97a 00:17:11.133 17:58:29 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:17:11.133 17:58:29 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e056a435-a50c-4954-aebe-0bcc64e9d97a 00:17:11.393 17:58:29 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:11.652 17:58:29 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=fded16ec-d000-466b-aaff-6ea97b13be79 00:17:11.652 17:58:29 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u fded16ec-d000-466b-aaff-6ea97b13be79 00:17:11.652 17:58:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=fa94095e-12da-4d21-9669-680803d24d99 00:17:11.652 17:58:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 fa94095e-12da-4d21-9669-680803d24d99 00:17:11.653 17:58:30 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:17:11.653 17:58:30 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:11.653 17:58:30 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=fa94095e-12da-4d21-9669-680803d24d99 00:17:11.653 17:58:30 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:17:11.653 17:58:30 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size fa94095e-12da-4d21-9669-680803d24d99 00:17:11.653 17:58:30 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=fa94095e-12da-4d21-9669-680803d24d99 00:17:11.653 17:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:11.653 17:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:11.653 17:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:11.653 17:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fa94095e-12da-4d21-9669-680803d24d99 00:17:11.912 17:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:11.912 { 00:17:11.912 "name": "fa94095e-12da-4d21-9669-680803d24d99", 00:17:11.912 "aliases": [ 00:17:11.912 "lvs/nvme0n1p0" 00:17:11.912 ], 00:17:11.912 "product_name": "Logical Volume", 00:17:11.912 "block_size": 4096, 00:17:11.912 "num_blocks": 26476544, 00:17:11.912 "uuid": "fa94095e-12da-4d21-9669-680803d24d99", 00:17:11.912 "assigned_rate_limits": { 00:17:11.912 "rw_ios_per_sec": 0, 00:17:11.912 "rw_mbytes_per_sec": 0, 00:17:11.912 "r_mbytes_per_sec": 0, 00:17:11.912 "w_mbytes_per_sec": 0 00:17:11.912 }, 00:17:11.912 "claimed": false, 00:17:11.912 "zoned": false, 00:17:11.912 "supported_io_types": { 00:17:11.912 "read": true, 00:17:11.912 "write": true, 00:17:11.912 "unmap": true, 00:17:11.912 "flush": false, 00:17:11.912 "reset": true, 00:17:11.912 "nvme_admin": false, 00:17:11.912 "nvme_io": false, 00:17:11.912 "nvme_io_md": false, 00:17:11.912 "write_zeroes": true, 00:17:11.912 "zcopy": false, 00:17:11.912 "get_zone_info": false, 00:17:11.912 "zone_management": false, 00:17:11.912 "zone_append": false, 00:17:11.912 "compare": false, 00:17:11.912 "compare_and_write": false, 00:17:11.912 "abort": false, 00:17:11.912 "seek_hole": true, 00:17:11.912 "seek_data": true, 00:17:11.912 "copy": false, 00:17:11.912 "nvme_iov_md": false 00:17:11.913 }, 00:17:11.913 "driver_specific": { 00:17:11.913 "lvol": { 00:17:11.913 "lvol_store_uuid": "fded16ec-d000-466b-aaff-6ea97b13be79", 00:17:11.913 "base_bdev": "nvme0n1", 00:17:11.913 "thin_provision": true, 00:17:11.913 "num_allocated_clusters": 0, 00:17:11.913 "snapshot": false, 00:17:11.913 "clone": false, 00:17:11.913 "esnap_clone": false 00:17:11.913 } 00:17:11.913 } 00:17:11.913 } 00:17:11.913 ]' 00:17:11.913 17:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:11.913 17:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:11.913 17:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:11.913 17:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:11.913 17:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:11.913 17:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:17:11.913 17:58:30 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:17:11.913 17:58:30 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:17:11.913 17:58:30 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:12.196 17:58:30 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:12.196 17:58:30 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:12.196 17:58:30 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size fa94095e-12da-4d21-9669-680803d24d99 00:17:12.196 17:58:30 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=fa94095e-12da-4d21-9669-680803d24d99 00:17:12.196 17:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:12.196 17:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:12.196 17:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:12.196 17:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fa94095e-12da-4d21-9669-680803d24d99 00:17:12.458 17:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:12.458 { 00:17:12.459 "name": "fa94095e-12da-4d21-9669-680803d24d99", 00:17:12.459 "aliases": [ 00:17:12.459 "lvs/nvme0n1p0" 00:17:12.459 ], 00:17:12.459 "product_name": "Logical Volume", 00:17:12.459 "block_size": 4096, 00:17:12.459 "num_blocks": 26476544, 00:17:12.459 "uuid": "fa94095e-12da-4d21-9669-680803d24d99", 00:17:12.459 "assigned_rate_limits": { 00:17:12.459 "rw_ios_per_sec": 0, 00:17:12.459 "rw_mbytes_per_sec": 0, 00:17:12.459 "r_mbytes_per_sec": 0, 00:17:12.459 "w_mbytes_per_sec": 0 00:17:12.459 }, 00:17:12.459 "claimed": false, 00:17:12.459 "zoned": false, 00:17:12.459 "supported_io_types": { 00:17:12.459 "read": true, 00:17:12.459 "write": true, 00:17:12.459 "unmap": true, 00:17:12.459 "flush": false, 00:17:12.459 "reset": true, 00:17:12.459 "nvme_admin": false, 00:17:12.459 "nvme_io": false, 00:17:12.459 "nvme_io_md": false, 00:17:12.459 "write_zeroes": true, 00:17:12.459 "zcopy": false, 00:17:12.459 "get_zone_info": false, 00:17:12.459 "zone_management": false, 00:17:12.459 "zone_append": false, 00:17:12.459 "compare": false, 00:17:12.459 "compare_and_write": false, 00:17:12.459 "abort": false, 00:17:12.459 "seek_hole": true, 00:17:12.459 "seek_data": true, 00:17:12.459 "copy": false, 00:17:12.459 "nvme_iov_md": false 00:17:12.459 }, 00:17:12.459 "driver_specific": { 00:17:12.459 "lvol": { 00:17:12.459 "lvol_store_uuid": "fded16ec-d000-466b-aaff-6ea97b13be79", 00:17:12.459 "base_bdev": "nvme0n1", 00:17:12.459 "thin_provision": true, 00:17:12.459 "num_allocated_clusters": 0, 00:17:12.459 "snapshot": false, 00:17:12.459 "clone": false, 00:17:12.459 "esnap_clone": false 00:17:12.459 } 00:17:12.459 } 00:17:12.459 } 00:17:12.459 ]' 00:17:12.459 17:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:12.459 17:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:12.459 17:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:12.459 17:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:12.459 17:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:12.459 17:58:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:17:12.459 17:58:30 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:17:12.459 17:58:30 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:12.717 17:58:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:17:12.717 17:58:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size fa94095e-12da-4d21-9669-680803d24d99 00:17:12.717 17:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=fa94095e-12da-4d21-9669-680803d24d99 00:17:12.717 17:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:12.717 17:58:31 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:17:12.717 17:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:12.717 17:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fa94095e-12da-4d21-9669-680803d24d99 00:17:12.976 17:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:12.976 { 00:17:12.976 "name": "fa94095e-12da-4d21-9669-680803d24d99", 00:17:12.976 "aliases": [ 00:17:12.976 "lvs/nvme0n1p0" 00:17:12.976 ], 00:17:12.976 "product_name": "Logical Volume", 00:17:12.976 "block_size": 4096, 00:17:12.976 "num_blocks": 26476544, 00:17:12.976 "uuid": "fa94095e-12da-4d21-9669-680803d24d99", 00:17:12.976 "assigned_rate_limits": { 00:17:12.976 "rw_ios_per_sec": 0, 00:17:12.976 "rw_mbytes_per_sec": 0, 00:17:12.976 "r_mbytes_per_sec": 0, 00:17:12.976 "w_mbytes_per_sec": 0 00:17:12.976 }, 00:17:12.976 "claimed": false, 00:17:12.976 "zoned": false, 00:17:12.976 "supported_io_types": { 00:17:12.976 "read": true, 00:17:12.976 "write": true, 00:17:12.976 "unmap": true, 00:17:12.976 "flush": false, 00:17:12.976 "reset": true, 00:17:12.976 "nvme_admin": false, 00:17:12.976 "nvme_io": false, 00:17:12.976 "nvme_io_md": false, 00:17:12.976 "write_zeroes": true, 00:17:12.976 "zcopy": false, 00:17:12.976 "get_zone_info": false, 00:17:12.976 "zone_management": false, 00:17:12.976 "zone_append": false, 00:17:12.976 "compare": false, 00:17:12.976 "compare_and_write": false, 00:17:12.976 "abort": false, 00:17:12.976 "seek_hole": true, 00:17:12.976 "seek_data": true, 00:17:12.976 "copy": false, 00:17:12.976 "nvme_iov_md": false 00:17:12.976 }, 00:17:12.976 "driver_specific": { 00:17:12.976 "lvol": { 00:17:12.976 "lvol_store_uuid": "fded16ec-d000-466b-aaff-6ea97b13be79", 00:17:12.976 "base_bdev": "nvme0n1", 00:17:12.976 "thin_provision": true, 00:17:12.976 "num_allocated_clusters": 0, 00:17:12.976 "snapshot": false, 00:17:12.976 "clone": false, 00:17:12.976 "esnap_clone": false 00:17:12.976 } 00:17:12.976 } 00:17:12.976 } 00:17:12.976 ]' 00:17:12.976 17:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:12.976 17:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:12.976 17:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:12.976 17:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:12.976 17:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:12.976 17:58:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:17:12.976 17:58:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:17:12.976 17:58:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d fa94095e-12da-4d21-9669-680803d24d99 -c nvc0n1p0 --l2p_dram_limit 20 00:17:13.235 [2024-10-25 17:58:31.478165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.235 [2024-10-25 17:58:31.478210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:13.235 [2024-10-25 17:58:31.478221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:13.235 [2024-10-25 17:58:31.478229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.235 [2024-10-25 17:58:31.478278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.235 [2024-10-25 17:58:31.478287] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:13.235 [2024-10-25 17:58:31.478294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:13.235 [2024-10-25 17:58:31.478304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.235 [2024-10-25 17:58:31.478317] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:13.235 [2024-10-25 17:58:31.478911] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:13.235 [2024-10-25 17:58:31.478925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.235 [2024-10-25 17:58:31.478934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:13.235 [2024-10-25 17:58:31.478941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:17:13.235 [2024-10-25 17:58:31.478948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.235 [2024-10-25 17:58:31.478972] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e5dd03cb-6486-4ff6-aef3-213c3cef0ed1 00:17:13.235 [2024-10-25 17:58:31.479946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.235 [2024-10-25 17:58:31.480060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:13.235 [2024-10-25 17:58:31.480077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:13.235 [2024-10-25 17:58:31.480086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.235 [2024-10-25 17:58:31.484848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.235 [2024-10-25 17:58:31.484874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:13.235 [2024-10-25 17:58:31.484884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.684 ms 00:17:13.235 [2024-10-25 17:58:31.484890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.235 [2024-10-25 17:58:31.484959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.235 [2024-10-25 17:58:31.484966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:13.235 [2024-10-25 17:58:31.484979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:17:13.235 [2024-10-25 17:58:31.484985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.235 [2024-10-25 17:58:31.485018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.235 [2024-10-25 17:58:31.485026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:13.235 [2024-10-25 17:58:31.485034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:13.235 [2024-10-25 17:58:31.485040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.235 [2024-10-25 17:58:31.485057] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:13.235 [2024-10-25 17:58:31.488074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.235 [2024-10-25 17:58:31.488100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:13.235 [2024-10-25 17:58:31.488108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.024 ms 00:17:13.235 [2024-10-25 17:58:31.488116] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.235 [2024-10-25 17:58:31.488140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.235 [2024-10-25 17:58:31.488149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:13.235 [2024-10-25 17:58:31.488156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:13.235 [2024-10-25 17:58:31.488163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.235 [2024-10-25 17:58:31.488182] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:13.235 [2024-10-25 17:58:31.488290] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:13.235 [2024-10-25 17:58:31.488301] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:13.235 [2024-10-25 17:58:31.488311] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:13.235 [2024-10-25 17:58:31.488319] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:13.235 [2024-10-25 17:58:31.488327] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:13.235 [2024-10-25 17:58:31.488334] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:13.235 [2024-10-25 17:58:31.488341] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:13.235 [2024-10-25 17:58:31.488347] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:13.235 [2024-10-25 17:58:31.488353] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:13.235 [2024-10-25 17:58:31.488360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.236 [2024-10-25 17:58:31.488367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:13.236 [2024-10-25 17:58:31.488373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:17:13.236 [2024-10-25 17:58:31.488381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.236 [2024-10-25 17:58:31.488450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.236 [2024-10-25 17:58:31.488459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:13.236 [2024-10-25 17:58:31.488465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:13.236 [2024-10-25 17:58:31.488473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.236 [2024-10-25 17:58:31.488544] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:13.236 [2024-10-25 17:58:31.488552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:13.236 [2024-10-25 17:58:31.488579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:13.236 [2024-10-25 17:58:31.488586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.236 [2024-10-25 17:58:31.488594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:13.236 [2024-10-25 17:58:31.488601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:13.236 [2024-10-25 17:58:31.488606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:13.236 
[2024-10-25 17:58:31.488613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:13.236 [2024-10-25 17:58:31.488618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:13.236 [2024-10-25 17:58:31.488625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:13.236 [2024-10-25 17:58:31.488630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:13.236 [2024-10-25 17:58:31.488637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:13.236 [2024-10-25 17:58:31.488642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:13.236 [2024-10-25 17:58:31.488654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:13.236 [2024-10-25 17:58:31.488659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:13.236 [2024-10-25 17:58:31.488669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.236 [2024-10-25 17:58:31.488674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:13.236 [2024-10-25 17:58:31.488681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:13.236 [2024-10-25 17:58:31.488686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.236 [2024-10-25 17:58:31.488694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:13.236 [2024-10-25 17:58:31.488699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:13.236 [2024-10-25 17:58:31.488705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:13.236 [2024-10-25 17:58:31.488710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:13.236 [2024-10-25 17:58:31.488717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:13.236 [2024-10-25 17:58:31.488722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:13.236 [2024-10-25 17:58:31.488729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:13.236 [2024-10-25 17:58:31.488733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:13.236 [2024-10-25 17:58:31.488740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:13.236 [2024-10-25 17:58:31.488745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:13.236 [2024-10-25 17:58:31.488751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:13.236 [2024-10-25 17:58:31.488756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:13.236 [2024-10-25 17:58:31.488764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:13.236 [2024-10-25 17:58:31.488769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:13.236 [2024-10-25 17:58:31.488775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:13.236 [2024-10-25 17:58:31.488780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:13.236 [2024-10-25 17:58:31.488787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:13.236 [2024-10-25 17:58:31.488792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:13.236 [2024-10-25 17:58:31.488798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:13.236 [2024-10-25 17:58:31.488803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:17:13.236 [2024-10-25 17:58:31.488809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.236 [2024-10-25 17:58:31.488815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:13.236 [2024-10-25 17:58:31.488821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:13.236 [2024-10-25 17:58:31.488826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.236 [2024-10-25 17:58:31.488832] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:13.236 [2024-10-25 17:58:31.488838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:13.236 [2024-10-25 17:58:31.488845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:13.236 [2024-10-25 17:58:31.488850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.236 [2024-10-25 17:58:31.488860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:13.236 [2024-10-25 17:58:31.488866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:13.236 [2024-10-25 17:58:31.488872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:13.236 [2024-10-25 17:58:31.488878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:13.236 [2024-10-25 17:58:31.488884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:13.236 [2024-10-25 17:58:31.488889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:13.236 [2024-10-25 17:58:31.488898] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:13.236 [2024-10-25 17:58:31.488906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:13.236 [2024-10-25 17:58:31.488914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:13.236 [2024-10-25 17:58:31.488920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:13.236 [2024-10-25 17:58:31.488927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:13.236 [2024-10-25 17:58:31.488933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:13.236 [2024-10-25 17:58:31.488940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:13.236 [2024-10-25 17:58:31.488945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:13.236 [2024-10-25 17:58:31.488952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:13.236 [2024-10-25 17:58:31.488957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:13.236 [2024-10-25 17:58:31.488966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:13.236 [2024-10-25 17:58:31.488971] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:13.236 [2024-10-25 17:58:31.488977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:13.236 [2024-10-25 17:58:31.488983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:13.236 [2024-10-25 17:58:31.488990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:13.236 [2024-10-25 17:58:31.488997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:13.236 [2024-10-25 17:58:31.489004] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:13.236 [2024-10-25 17:58:31.489010] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:13.236 [2024-10-25 17:58:31.489018] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:13.236 [2024-10-25 17:58:31.489025] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:13.236 [2024-10-25 17:58:31.489032] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:13.236 [2024-10-25 17:58:31.489037] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:13.236 [2024-10-25 17:58:31.489045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.236 [2024-10-25 17:58:31.489051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:13.236 [2024-10-25 17:58:31.489060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:17:13.236 [2024-10-25 17:58:31.489065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.237 [2024-10-25 17:58:31.489105] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
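The layout dump above is internally consistent with the bdev geometry queried at the start of this test: the lvol's 4096-byte blocks times its 26476544 blocks give the 103424 MiB base-device capacity, and the 20971520 L2P entries at 4 bytes apiece give exactly the 80.00 MiB l2p region. A standalone shell check of those figures (a hypothetical sketch, not part of the test scripts):

    # numbers copied from the log above, nothing queried live
    bs=4096 nb=26476544
    echo $(( bs * nb / 1024 / 1024 ))              # 103424 -> base device capacity in MiB
    entries=20971520 addr_size=4
    echo $(( entries * addr_size / 1024 / 1024 ))  # 80 -> the 80.00 MiB l2p region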
00:17:13.237 [2024-10-25 17:58:31.489113] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:15.766 [2024-10-25 17:58:33.853805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.766 [2024-10-25 17:58:33.853864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:15.766 [2024-10-25 17:58:33.853881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2364.684 ms 00:17:15.766 [2024-10-25 17:58:33.853892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.766 [2024-10-25 17:58:33.879152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.766 [2024-10-25 17:58:33.879332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:15.766 [2024-10-25 17:58:33.879356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.054 ms 00:17:15.766 [2024-10-25 17:58:33.879365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.766 [2024-10-25 17:58:33.879517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.766 [2024-10-25 17:58:33.879527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:15.766 [2024-10-25 17:58:33.879540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:15.766 [2024-10-25 17:58:33.879547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.766 [2024-10-25 17:58:33.924793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.766 [2024-10-25 17:58:33.924839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:15.766 [2024-10-25 17:58:33.924855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.198 ms 00:17:15.766 [2024-10-25 17:58:33.924863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.766 [2024-10-25 17:58:33.924905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.766 [2024-10-25 17:58:33.924915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:15.766 [2024-10-25 17:58:33.924925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:15.766 [2024-10-25 17:58:33.924934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.766 [2024-10-25 17:58:33.925291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.766 [2024-10-25 17:58:33.925308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:15.766 [2024-10-25 17:58:33.925319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:17:15.766 [2024-10-25 17:58:33.925327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.766 [2024-10-25 17:58:33.925460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.766 [2024-10-25 17:58:33.925474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:15.766 [2024-10-25 17:58:33.925486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:17:15.766 [2024-10-25 17:58:33.925494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.766 [2024-10-25 17:58:33.938384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.766 [2024-10-25 17:58:33.938415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:15.766 [2024-10-25 
17:58:33.938426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.873 ms 00:17:15.766 [2024-10-25 17:58:33.938434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.766 [2024-10-25 17:58:33.949709] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:17:15.766 [2024-10-25 17:58:33.954745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.766 [2024-10-25 17:58:33.954892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:15.766 [2024-10-25 17:58:33.954908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.241 ms 00:17:15.766 [2024-10-25 17:58:33.954918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.766 [2024-10-25 17:58:34.016510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.766 [2024-10-25 17:58:34.016584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:15.766 [2024-10-25 17:58:34.016599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.566 ms 00:17:15.766 [2024-10-25 17:58:34.016610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.766 [2024-10-25 17:58:34.016783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.766 [2024-10-25 17:58:34.016798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:15.766 [2024-10-25 17:58:34.016807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:17:15.766 [2024-10-25 17:58:34.016816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.766 [2024-10-25 17:58:34.039882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.766 [2024-10-25 17:58:34.039936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:15.766 [2024-10-25 17:58:34.039949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.024 ms 00:17:15.766 [2024-10-25 17:58:34.039960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.766 [2024-10-25 17:58:34.062570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.766 [2024-10-25 17:58:34.062618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:15.766 [2024-10-25 17:58:34.062632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.569 ms 00:17:15.766 [2024-10-25 17:58:34.062641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.766 [2024-10-25 17:58:34.063210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.766 [2024-10-25 17:58:34.063233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:15.766 [2024-10-25 17:58:34.063242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:17:15.766 [2024-10-25 17:58:34.063251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.766 [2024-10-25 17:58:34.132635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.766 [2024-10-25 17:58:34.132703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:15.766 [2024-10-25 17:58:34.132718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.332 ms 00:17:15.766 [2024-10-25 17:58:34.132728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.766 [2024-10-25 
17:58:34.157052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.766 [2024-10-25 17:58:34.157112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:15.766 [2024-10-25 17:58:34.157126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.239 ms 00:17:15.766 [2024-10-25 17:58:34.157136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.766 [2024-10-25 17:58:34.181076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.766 [2024-10-25 17:58:34.181306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:15.766 [2024-10-25 17:58:34.181325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.893 ms 00:17:15.766 [2024-10-25 17:58:34.181335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.025 [2024-10-25 17:58:34.204498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.025 [2024-10-25 17:58:34.204706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:16.025 [2024-10-25 17:58:34.204723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.123 ms 00:17:16.025 [2024-10-25 17:58:34.204733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.025 [2024-10-25 17:58:34.204774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.025 [2024-10-25 17:58:34.204791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:16.025 [2024-10-25 17:58:34.204799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:16.025 [2024-10-25 17:58:34.204808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.025 [2024-10-25 17:58:34.204891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.025 [2024-10-25 17:58:34.204902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:16.025 [2024-10-25 17:58:34.204910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:16.025 [2024-10-25 17:58:34.204919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.025 [2024-10-25 17:58:34.205800] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2727.192 ms, result 0 00:17:16.025 { 00:17:16.025 "name": "ftl0", 00:17:16.025 "uuid": "e5dd03cb-6486-4ff6-aef3-213c3cef0ed1" 00:17:16.025 } 00:17:16.025 17:58:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:17:16.025 17:58:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:17:16.025 17:58:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:17:16.025 17:58:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:17:16.283 [2024-10-25 17:58:34.526124] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:16.283 I/O size of 69632 is greater than zero copy threshold (65536). 00:17:16.283 Zero copy mechanism will not be used. 00:17:16.283 Running I/O for 4 seconds... 
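The zero-copy notice just above is bdevperf's own: the 69632-byte I/O size passed by bdevperf.sh (17 of the lvol's 4 KiB blocks, i.e. 68 KiB) sits just over the reported 65536-byte threshold, so the zero copy path is skipped for this run. A quick check of those figures (hypothetical sketch):

    echo $(( 69632 / 4096 ))                              # 17 blocks of 4 KiB per I/O
    (( 69632 > 65536 )) && echo 'zero copy disabled for this run'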
00:17:18.148 2707.00 IOPS, 179.76 MiB/s [2024-10-25T17:58:37.956Z] 2869.00 IOPS, 190.52 MiB/s [2024-10-25T17:58:38.894Z] 2873.00 IOPS, 190.79 MiB/s 00:17:20.459 Latency(us) 00:17:20.459 [2024-10-25T17:58:38.894Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.459 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:17:20.459 ftl0 : 4.00 2861.40 190.01 0.00 0.00 367.47 174.87 127442.31 00:17:20.459 [2024-10-25T17:58:38.894Z] =================================================================================================================== 00:17:20.459 [2024-10-25T17:58:38.894Z] Total : 2861.40 190.01 0.00 0.00 367.47 174.87 127442.31 00:17:20.459 { 00:17:20.459 "results": [ 00:17:20.459 { 00:17:20.459 "job": "ftl0", 00:17:20.459 "core_mask": "0x1", 00:17:20.459 "workload": "randwrite", 00:17:20.459 "status": "finished", 00:17:20.459 "queue_depth": 1, 00:17:20.459 "io_size": 69632, 00:17:20.459 "runtime": 4.000139, 00:17:20.459 "iops": 2861.40056633032, 00:17:20.459 "mibps": 190.0148813578728, 00:17:20.459 "io_failed": 0, 00:17:20.459 "io_timeout": 0, 00:17:20.459 "avg_latency_us": 367.47422089006574, 00:17:20.459 "min_latency_us": 174.8676923076923, 00:17:20.459 "max_latency_us": 127442.31384615385 00:17:20.459 } 00:17:20.459 ], 00:17:20.459 "core_count": 1 00:17:20.459 } 00:17:20.459 [2024-10-25 17:58:38.535103] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:20.459 17:58:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:17:20.459 [2024-10-25 17:58:38.635396] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:20.459 Running I/O for 4 seconds... 
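For the q=1 pass just completed, the table's IOPS and MiB/s columns agree to the printed precision, since MiB/s = IOPS x io_size / 2^20. Recomputing from the iops value in the JSON blob above (a hypothetical spot check):

    awk 'BEGIN { printf "%.2f MiB/s\n", 2861.40 * 69632 / 1048576 }'   # 190.01

The max-latency column is worth a glance too: a ~127 ms worst case against a 367 us average is a pronounced tail for a 4-second q=1 run.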
00:17:22.331 6422.00 IOPS, 25.09 MiB/s [2024-10-25T17:58:41.700Z] 6966.50 IOPS, 27.21 MiB/s [2024-10-25T17:58:43.095Z] 8075.67 IOPS, 31.55 MiB/s [2024-10-25T17:58:43.095Z] 7786.00 IOPS, 30.41 MiB/s 00:17:24.660 Latency(us) 00:17:24.660 [2024-10-25T17:58:43.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.660 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:17:24.660 ftl0 : 4.03 7764.33 30.33 0.00 0.00 16429.34 236.31 49807.36 00:17:24.660 [2024-10-25T17:58:43.095Z] =================================================================================================================== 00:17:24.660 [2024-10-25T17:58:43.095Z] Total : 7764.33 30.33 0.00 0.00 16429.34 0.00 49807.36 00:17:24.660 [2024-10-25 17:58:42.673032] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:24.660 { 00:17:24.660 "results": [ 00:17:24.660 { 00:17:24.660 "job": "ftl0", 00:17:24.660 "core_mask": "0x1", 00:17:24.660 "workload": "randwrite", 00:17:24.660 "status": "finished", 00:17:24.660 "queue_depth": 128, 00:17:24.660 "io_size": 4096, 00:17:24.660 "runtime": 4.027651, 00:17:24.660 "iops": 7764.327147511043, 00:17:24.660 "mibps": 30.329402919965013, 00:17:24.660 "io_failed": 0, 00:17:24.660 "io_timeout": 0, 00:17:24.660 "avg_latency_us": 16429.33905740205, 00:17:24.660 "min_latency_us": 236.30769230769232, 00:17:24.660 "max_latency_us": 49807.36 00:17:24.660 } 00:17:24.660 ], 00:17:24.660 "core_count": 1 00:17:24.660 } 00:17:24.660 17:58:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:17:24.660 [2024-10-25 17:58:42.779798] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:24.660 Running I/O for 4 seconds... 
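The q=128, 4 KiB randwrite pass lands at ~7764 IOPS, and its 16.43 ms average latency is close to what Little's law predicts at that depth (128 / 7764.33 s). Two hypothetical one-liners, the second assuming the JSON blob above were saved to a file named results.json:

    awk 'BEGIN { printf "%.1f ms\n", 128 / 7764.33 * 1000 }'                    # ~16.5 ms, vs 16.43 ms measured
    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s"' results.json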
00:17:26.564 6432.00 IOPS, 25.12 MiB/s [2024-10-25T17:58:45.932Z] 7190.50 IOPS, 28.09 MiB/s [2024-10-25T17:58:46.865Z] 7628.33 IOPS, 29.80 MiB/s [2024-10-25T17:58:46.865Z] 8421.25 IOPS, 32.90 MiB/s 00:17:28.430 Latency(us) 00:17:28.430 [2024-10-25T17:58:46.865Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.430 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:28.430 Verification LBA range: start 0x0 length 0x1400000 00:17:28.430 ftl0 : 4.01 8437.69 32.96 0.00 0.00 15129.34 217.40 36498.51 00:17:28.430 [2024-10-25T17:58:46.865Z] =================================================================================================================== 00:17:28.430 [2024-10-25T17:58:46.865Z] Total : 8437.69 32.96 0.00 0.00 15129.34 0.00 36498.51 00:17:28.430 { 00:17:28.430 "results": [ 00:17:28.430 { 00:17:28.430 "job": "ftl0", 00:17:28.430 "core_mask": "0x1", 00:17:28.430 "workload": "verify", 00:17:28.430 "status": "finished", 00:17:28.430 "verify_range": { 00:17:28.430 "start": 0, 00:17:28.430 "length": 20971520 00:17:28.430 }, 00:17:28.430 "queue_depth": 128, 00:17:28.430 "io_size": 4096, 00:17:28.430 "runtime": 4.007257, 00:17:28.430 "iops": 8437.691917438786, 00:17:28.430 "mibps": 32.95973405249526, 00:17:28.430 "io_failed": 0, 00:17:28.430 "io_timeout": 0, 00:17:28.430 "avg_latency_us": 15129.343718115553, 00:17:28.430 "min_latency_us": 217.40307692307692, 00:17:28.430 "max_latency_us": 36498.51076923077 00:17:28.430 } 00:17:28.430 ], 00:17:28.430 "core_count": 1 00:17:28.430 } 00:17:28.430 [2024-10-25 17:58:46.803601] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:28.430 17:58:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:17:28.688 [2024-10-25 17:58:46.999683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.688 [2024-10-25 17:58:46.999737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:28.688 [2024-10-25 17:58:46.999749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:28.688 [2024-10-25 17:58:46.999758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.688 [2024-10-25 17:58:46.999775] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:28.688 [2024-10-25 17:58:47.001891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.688 [2024-10-25 17:58:47.001919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:28.688 [2024-10-25 17:58:47.001930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.099 ms 00:17:28.688 [2024-10-25 17:58:47.001937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.688 [2024-10-25 17:58:47.003760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.688 [2024-10-25 17:58:47.003787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:28.688 [2024-10-25 17:58:47.003797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.802 ms 00:17:28.688 [2024-10-25 17:58:47.003804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.688 [2024-10-25 17:58:47.122582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.947 [2024-10-25 17:58:47.122761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:17:28.947 [2024-10-25 17:58:47.122783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 118.753 ms 00:17:28.947 [2024-10-25 17:58:47.122790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.947 [2024-10-25 17:58:47.127671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.947 [2024-10-25 17:58:47.127694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:28.947 [2024-10-25 17:58:47.127704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.852 ms 00:17:28.947 [2024-10-25 17:58:47.127711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.947 [2024-10-25 17:58:47.145936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.947 [2024-10-25 17:58:47.146052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:28.947 [2024-10-25 17:58:47.146069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.179 ms 00:17:28.947 [2024-10-25 17:58:47.146075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.947 [2024-10-25 17:58:47.157829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.947 [2024-10-25 17:58:47.157860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:28.947 [2024-10-25 17:58:47.157875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.727 ms 00:17:28.947 [2024-10-25 17:58:47.157884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.947 [2024-10-25 17:58:47.157985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.947 [2024-10-25 17:58:47.157994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:28.947 [2024-10-25 17:58:47.158004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:17:28.947 [2024-10-25 17:58:47.158010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.947 [2024-10-25 17:58:47.175999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.947 [2024-10-25 17:58:47.176025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:28.947 [2024-10-25 17:58:47.176035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.974 ms 00:17:28.947 [2024-10-25 17:58:47.176041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.947 [2024-10-25 17:58:47.193455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.947 [2024-10-25 17:58:47.193576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:28.947 [2024-10-25 17:58:47.193592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.386 ms 00:17:28.947 [2024-10-25 17:58:47.193598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.947 [2024-10-25 17:58:47.210762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.947 [2024-10-25 17:58:47.210788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:28.947 [2024-10-25 17:58:47.210799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.138 ms 00:17:28.947 [2024-10-25 17:58:47.210804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.947 [2024-10-25 17:58:47.227932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.947 [2024-10-25 17:58:47.227961] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:28.947 [2024-10-25 17:58:47.227981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.072 ms 00:17:28.947 [2024-10-25 17:58:47.227987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.947 [2024-10-25 17:58:47.228015] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:28.947 [2024-10-25 17:58:47.228027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:28.947 [2024-10-25 17:58:47.228037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:28.947 [2024-10-25 17:58:47.228043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:28.947 [2024-10-25 17:58:47.228050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:28.947 [2024-10-25 17:58:47.228056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:28.947 [2024-10-25 17:58:47.228063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:28.947 [2024-10-25 17:58:47.228069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:28.947 [2024-10-25 17:58:47.228077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:28.947 [2024-10-25 17:58:47.228083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:28.947 [2024-10-25 17:58:47.228090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:28.947 [2024-10-25 17:58:47.228095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:28.947 [2024-10-25 17:58:47.228102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:28.947 [2024-10-25 17:58:47.228108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:28.947 [2024-10-25 17:58:47.228117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:28.947 [2024-10-25 17:58:47.228122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:28.947 [2024-10-25 17:58:47.228129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:28.947 [2024-10-25 17:58:47.228135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:28.947 [2024-10-25 17:58:47.228142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:28.947 [2024-10-25 17:58:47.228148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:28.947 [2024-10-25 17:58:47.228156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:28.947 [2024-10-25 17:58:47.228162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:28.947 [2024-10-25 17:58:47.228169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:17:28.947 [2024-10-25 17:58:47.228174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:28.947 [2024-10-25 17:58:47.228181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:28.947 [2024-10-25 17:58:47.228187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:28.947 [2024-10-25 17:58:47.228194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:28.947 [2024-10-25 17:58:47.228200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:28.947 [2024-10-25 17:58:47.228207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:28.947 [2024-10-25 17:58:47.228212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:28.947 [2024-10-25 17:58:47.228221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:28.947 [2024-10-25 17:58:47.228227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228666] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:28.948 [2024-10-25 17:58:47.228699] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:28.948 [2024-10-25 17:58:47.228706] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e5dd03cb-6486-4ff6-aef3-213c3cef0ed1 00:17:28.948 [2024-10-25 17:58:47.228713] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:28.948 [2024-10-25 17:58:47.228719] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:28.948 [2024-10-25 17:58:47.228733] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:28.948 [2024-10-25 17:58:47.228740] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:28.948 [2024-10-25 17:58:47.228758] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:28.948 [2024-10-25 17:58:47.228765] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:28.948 [2024-10-25 17:58:47.228770] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:28.948 [2024-10-25 17:58:47.228778] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:28.948 [2024-10-25 17:58:47.228782] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:28.948 [2024-10-25 17:58:47.228789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.948 [2024-10-25 17:58:47.228795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:28.948 [2024-10-25 17:58:47.228802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.775 ms 00:17:28.948 [2024-10-25 17:58:47.228807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.948 [2024-10-25 17:58:47.238201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.948 [2024-10-25 17:58:47.238226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:28.948 [2024-10-25 17:58:47.238238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.368 ms 00:17:28.948 [2024-10-25 17:58:47.238244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.948 [2024-10-25 17:58:47.238518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.948 [2024-10-25 17:58:47.238525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:28.949 [2024-10-25 17:58:47.238533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:17:28.949 [2024-10-25 17:58:47.238538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.949 [2024-10-25 17:58:47.265969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.949 [2024-10-25 17:58:47.266004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:28.949 [2024-10-25 17:58:47.266016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.949 [2024-10-25 17:58:47.266022] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:28.949 [2024-10-25 17:58:47.266077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.949 [2024-10-25 17:58:47.266084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:28.949 [2024-10-25 17:58:47.266092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.949 [2024-10-25 17:58:47.266099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.949 [2024-10-25 17:58:47.266158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.949 [2024-10-25 17:58:47.266166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:28.949 [2024-10-25 17:58:47.266175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.949 [2024-10-25 17:58:47.266181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.949 [2024-10-25 17:58:47.266194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.949 [2024-10-25 17:58:47.266200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:28.949 [2024-10-25 17:58:47.266208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.949 [2024-10-25 17:58:47.266213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.949 [2024-10-25 17:58:47.325848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.949 [2024-10-25 17:58:47.325888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:28.949 [2024-10-25 17:58:47.325904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.949 [2024-10-25 17:58:47.325909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.949 [2024-10-25 17:58:47.374650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.949 [2024-10-25 17:58:47.374697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:28.949 [2024-10-25 17:58:47.374708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.949 [2024-10-25 17:58:47.374714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.949 [2024-10-25 17:58:47.374793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.949 [2024-10-25 17:58:47.374802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:28.949 [2024-10-25 17:58:47.374810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.949 [2024-10-25 17:58:47.374818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.949 [2024-10-25 17:58:47.374851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.949 [2024-10-25 17:58:47.374858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:28.949 [2024-10-25 17:58:47.374865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.949 [2024-10-25 17:58:47.374871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.949 [2024-10-25 17:58:47.374941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.949 [2024-10-25 17:58:47.374949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:28.949 [2024-10-25 17:58:47.374959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:17:28.949 [2024-10-25 17:58:47.374964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.949 [2024-10-25 17:58:47.374989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.949 [2024-10-25 17:58:47.374997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:28.949 [2024-10-25 17:58:47.375004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.949 [2024-10-25 17:58:47.375011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.949 [2024-10-25 17:58:47.375036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.949 [2024-10-25 17:58:47.375043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:28.949 [2024-10-25 17:58:47.375051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.949 [2024-10-25 17:58:47.375057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.949 [2024-10-25 17:58:47.375091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.949 [2024-10-25 17:58:47.375104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:28.949 [2024-10-25 17:58:47.375111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.949 [2024-10-25 17:58:47.375117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.949 [2024-10-25 17:58:47.375213] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 375.500 ms, result 0 00:17:28.949 true 00:17:29.208 17:58:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 73174 00:17:29.208 17:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 73174 ']' 00:17:29.208 17:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 73174 00:17:29.208 17:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:17:29.208 17:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:29.208 17:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73174 00:17:29.208 killing process with pid 73174 00:17:29.208 Received shutdown signal, test time was about 4.000000 seconds 00:17:29.208 00:17:29.208 Latency(us) 00:17:29.208 [2024-10-25T17:58:47.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.208 [2024-10-25T17:58:47.643Z] =================================================================================================================== 00:17:29.208 [2024-10-25T17:58:47.643Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:29.208 17:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:29.208 17:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:29.208 17:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73174' 00:17:29.208 17:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 73174 00:17:29.208 17:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 73174 00:17:30.146 Remove shared memory files 00:17:30.146 17:58:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:30.146 17:58:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:17:30.146 17:58:48 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:30.146 17:58:48 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:17:30.146 17:58:48 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:17:30.146 17:58:48 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:17:30.146 17:58:48 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:30.146 17:58:48 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:17:30.146 ************************************ 00:17:30.146 END TEST ftl_bdevperf 00:17:30.146 ************************************ 00:17:30.146 00:17:30.146 real 0m20.610s 00:17:30.146 user 0m23.222s 00:17:30.146 sys 0m0.836s 00:17:30.146 17:58:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:30.146 17:58:48 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:30.146 17:58:48 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:30.146 17:58:48 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:30.146 17:58:48 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:30.146 17:58:48 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:30.146 ************************************ 00:17:30.146 START TEST ftl_trim 00:17:30.146 ************************************ 00:17:30.146 17:58:48 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:30.146 * Looking for test storage... 00:17:30.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:30.146 17:58:48 ftl.ftl_trim -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:17:30.146 17:58:48 ftl.ftl_trim -- common/autotest_common.sh@1689 -- # lcov --version 00:17:30.146 17:58:48 ftl.ftl_trim -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:17:30.146 17:58:48 ftl.ftl_trim -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:17:30.146 17:58:48 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:30.146 17:58:48 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:30.146 17:58:48 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:30.146 17:58:48 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:17:30.146 17:58:48 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:17:30.146 17:58:48 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:17:30.146 17:58:48 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:17:30.146 17:58:48 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:17:30.146 17:58:48 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:17:30.146 17:58:48 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:17:30.146 17:58:48 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:30.146 17:58:48 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:17:30.146 17:58:48 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:17:30.146 17:58:48 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:30.146 17:58:48 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:30.146 17:58:48 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:17:30.146 17:58:48 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:17:30.146 17:58:48 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:30.146 17:58:48 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:17:30.146 17:58:48 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:17:30.146 17:58:48 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:17:30.146 17:58:48 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:17:30.146 17:58:48 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:30.146 17:58:48 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:17:30.146 17:58:48 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:17:30.146 17:58:48 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:30.146 17:58:48 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:30.146 17:58:48 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:17:30.146 17:58:48 ftl.ftl_trim -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:30.146 17:58:48 ftl.ftl_trim -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:17:30.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.146 --rc genhtml_branch_coverage=1 00:17:30.146 --rc genhtml_function_coverage=1 00:17:30.146 --rc genhtml_legend=1 00:17:30.146 --rc geninfo_all_blocks=1 00:17:30.146 --rc geninfo_unexecuted_blocks=1 00:17:30.146 00:17:30.146 ' 00:17:30.146 17:58:48 ftl.ftl_trim -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:17:30.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.146 --rc genhtml_branch_coverage=1 00:17:30.146 --rc genhtml_function_coverage=1 00:17:30.146 --rc genhtml_legend=1 00:17:30.146 --rc geninfo_all_blocks=1 00:17:30.146 --rc geninfo_unexecuted_blocks=1 00:17:30.146 00:17:30.146 ' 00:17:30.146 17:58:48 ftl.ftl_trim -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:17:30.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.146 --rc genhtml_branch_coverage=1 00:17:30.146 --rc genhtml_function_coverage=1 00:17:30.146 --rc genhtml_legend=1 00:17:30.147 --rc geninfo_all_blocks=1 00:17:30.147 --rc geninfo_unexecuted_blocks=1 00:17:30.147 00:17:30.147 ' 00:17:30.147 17:58:48 ftl.ftl_trim -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:17:30.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.147 --rc genhtml_branch_coverage=1 00:17:30.147 --rc genhtml_function_coverage=1 00:17:30.147 --rc genhtml_legend=1 00:17:30.147 --rc geninfo_all_blocks=1 00:17:30.147 --rc geninfo_unexecuted_blocks=1 00:17:30.147 00:17:30.147 ' 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
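The cmp_versions xtrace above is scripts/common.sh deciding that the installed lcov (1.15) is older than 2: it splits both version strings with IFS=.-:, validates each field with decimal, and compares field by field, which then selects the legacy --rc lcov_branch_coverage/lcov_function_coverage option names exported into LCOV_OPTS. A minimal sketch of that comparison, assuming a condensed hypothetical helper name cmp_lt (not the in-tree function, which also handles '>', '=' and mixed-length versions):

    # Condensed sketch of the '<' case of scripts/common.sh's cmp_versions:
    # split both versions on '.', '-' and ':', then compare numerically field
    # by field, treating any missing trailing field as 0.
    cmp_lt() {
        local IFS=.-: i a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earlier field smaller -> strictly less
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # earlier field larger -> not less
        done
        return 1                                        # all fields equal -> not strictly less
    }
    cmp_lt 1.15 2 && echo 'lcov < 2: keep legacy lcov_*_coverage rc names'

For '1.15' vs '2' the first field already decides (1 < 2, return 0), which is why the trace above stops at v=0 and LCOV_OPTS ends up with the 1.x-era lcov_branch_coverage/lcov_function_coverage switches.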
00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:30.147 17:58:48 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=73509 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 73509 00:17:30.147 17:58:48 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 73509 ']' 00:17:30.147 17:58:48 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:17:30.147 17:58:48 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.147 17:58:48 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:30.147 17:58:48 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.147 17:58:48 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:30.147 17:58:48 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:30.147 [2024-10-25 17:58:48.549897] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:17:30.147 [2024-10-25 17:58:48.550213] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73509 ] 00:17:30.408 [2024-10-25 17:58:48.709782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:30.408 [2024-10-25 17:58:48.814375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.408 [2024-10-25 17:58:48.814677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:30.408 [2024-10-25 17:58:48.814792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.994 17:58:49 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:30.995 17:58:49 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:30.995 17:58:49 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:30.995 17:58:49 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:17:30.995 17:58:49 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:30.995 17:58:49 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:17:30.995 17:58:49 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:17:30.995 17:58:49 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:31.252 17:58:49 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:31.252 17:58:49 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:17:31.252 17:58:49 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:31.252 17:58:49 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:31.252 17:58:49 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:31.252 17:58:49 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:31.252 17:58:49 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:31.252 17:58:49 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:31.510 17:58:49 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:31.510 { 00:17:31.510 "name": "nvme0n1", 00:17:31.510 "aliases": [ 
00:17:31.510 "33914af0-c655-4f80-b802-9f866bc0db1b" 00:17:31.510 ], 00:17:31.510 "product_name": "NVMe disk", 00:17:31.510 "block_size": 4096, 00:17:31.510 "num_blocks": 1310720, 00:17:31.510 "uuid": "33914af0-c655-4f80-b802-9f866bc0db1b", 00:17:31.510 "numa_id": -1, 00:17:31.510 "assigned_rate_limits": { 00:17:31.510 "rw_ios_per_sec": 0, 00:17:31.510 "rw_mbytes_per_sec": 0, 00:17:31.510 "r_mbytes_per_sec": 0, 00:17:31.510 "w_mbytes_per_sec": 0 00:17:31.510 }, 00:17:31.510 "claimed": true, 00:17:31.510 "claim_type": "read_many_write_one", 00:17:31.510 "zoned": false, 00:17:31.510 "supported_io_types": { 00:17:31.510 "read": true, 00:17:31.510 "write": true, 00:17:31.510 "unmap": true, 00:17:31.510 "flush": true, 00:17:31.510 "reset": true, 00:17:31.510 "nvme_admin": true, 00:17:31.510 "nvme_io": true, 00:17:31.510 "nvme_io_md": false, 00:17:31.510 "write_zeroes": true, 00:17:31.510 "zcopy": false, 00:17:31.510 "get_zone_info": false, 00:17:31.510 "zone_management": false, 00:17:31.510 "zone_append": false, 00:17:31.510 "compare": true, 00:17:31.510 "compare_and_write": false, 00:17:31.510 "abort": true, 00:17:31.510 "seek_hole": false, 00:17:31.510 "seek_data": false, 00:17:31.510 "copy": true, 00:17:31.510 "nvme_iov_md": false 00:17:31.510 }, 00:17:31.510 "driver_specific": { 00:17:31.510 "nvme": [ 00:17:31.510 { 00:17:31.510 "pci_address": "0000:00:11.0", 00:17:31.510 "trid": { 00:17:31.510 "trtype": "PCIe", 00:17:31.510 "traddr": "0000:00:11.0" 00:17:31.510 }, 00:17:31.510 "ctrlr_data": { 00:17:31.510 "cntlid": 0, 00:17:31.510 "vendor_id": "0x1b36", 00:17:31.510 "model_number": "QEMU NVMe Ctrl", 00:17:31.510 "serial_number": "12341", 00:17:31.510 "firmware_revision": "8.0.0", 00:17:31.510 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:31.510 "oacs": { 00:17:31.510 "security": 0, 00:17:31.510 "format": 1, 00:17:31.510 "firmware": 0, 00:17:31.510 "ns_manage": 1 00:17:31.510 }, 00:17:31.510 "multi_ctrlr": false, 00:17:31.510 "ana_reporting": false 00:17:31.510 }, 00:17:31.510 "vs": { 00:17:31.510 "nvme_version": "1.4" 00:17:31.510 }, 00:17:31.510 "ns_data": { 00:17:31.510 "id": 1, 00:17:31.510 "can_share": false 00:17:31.510 } 00:17:31.510 } 00:17:31.510 ], 00:17:31.510 "mp_policy": "active_passive" 00:17:31.510 } 00:17:31.510 } 00:17:31.510 ]' 00:17:31.510 17:58:49 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:31.510 17:58:49 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:31.510 17:58:49 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:31.510 17:58:49 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:31.510 17:58:49 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:31.510 17:58:49 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:17:31.510 17:58:49 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:17:31.510 17:58:49 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:31.510 17:58:49 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:17:31.510 17:58:49 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:31.510 17:58:49 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:31.768 17:58:50 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=fded16ec-d000-466b-aaff-6ea97b13be79 00:17:31.768 17:58:50 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:17:31.768 17:58:50 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u fded16ec-d000-466b-aaff-6ea97b13be79 00:17:32.025 17:58:50 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:32.284 17:58:50 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=e4759a77-fe86-438a-8bdd-6e445396c7c4 00:17:32.284 17:58:50 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e4759a77-fe86-438a-8bdd-6e445396c7c4 00:17:32.541 17:58:50 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=fb2afcdb-feb6-47d7-b2e8-d565b92af853 00:17:32.541 17:58:50 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 fb2afcdb-feb6-47d7-b2e8-d565b92af853 00:17:32.541 17:58:50 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:17:32.541 17:58:50 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:32.541 17:58:50 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=fb2afcdb-feb6-47d7-b2e8-d565b92af853 00:17:32.541 17:58:50 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:17:32.541 17:58:50 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size fb2afcdb-feb6-47d7-b2e8-d565b92af853 00:17:32.541 17:58:50 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=fb2afcdb-feb6-47d7-b2e8-d565b92af853 00:17:32.541 17:58:50 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:32.541 17:58:50 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:32.541 17:58:50 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:32.541 17:58:50 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fb2afcdb-feb6-47d7-b2e8-d565b92af853 00:17:32.798 17:58:51 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:32.798 { 00:17:32.798 "name": "fb2afcdb-feb6-47d7-b2e8-d565b92af853", 00:17:32.798 "aliases": [ 00:17:32.798 "lvs/nvme0n1p0" 00:17:32.798 ], 00:17:32.798 "product_name": "Logical Volume", 00:17:32.798 "block_size": 4096, 00:17:32.798 "num_blocks": 26476544, 00:17:32.798 "uuid": "fb2afcdb-feb6-47d7-b2e8-d565b92af853", 00:17:32.798 "assigned_rate_limits": { 00:17:32.798 "rw_ios_per_sec": 0, 00:17:32.798 "rw_mbytes_per_sec": 0, 00:17:32.798 "r_mbytes_per_sec": 0, 00:17:32.798 "w_mbytes_per_sec": 0 00:17:32.798 }, 00:17:32.798 "claimed": false, 00:17:32.798 "zoned": false, 00:17:32.798 "supported_io_types": { 00:17:32.798 "read": true, 00:17:32.798 "write": true, 00:17:32.798 "unmap": true, 00:17:32.798 "flush": false, 00:17:32.798 "reset": true, 00:17:32.798 "nvme_admin": false, 00:17:32.798 "nvme_io": false, 00:17:32.798 "nvme_io_md": false, 00:17:32.798 "write_zeroes": true, 00:17:32.798 "zcopy": false, 00:17:32.798 "get_zone_info": false, 00:17:32.798 "zone_management": false, 00:17:32.798 "zone_append": false, 00:17:32.798 "compare": false, 00:17:32.798 "compare_and_write": false, 00:17:32.798 "abort": false, 00:17:32.798 "seek_hole": true, 00:17:32.798 "seek_data": true, 00:17:32.798 "copy": false, 00:17:32.798 "nvme_iov_md": false 00:17:32.798 }, 00:17:32.798 "driver_specific": { 00:17:32.798 "lvol": { 00:17:32.798 "lvol_store_uuid": "e4759a77-fe86-438a-8bdd-6e445396c7c4", 00:17:32.798 "base_bdev": "nvme0n1", 00:17:32.798 "thin_provision": true, 00:17:32.798 "num_allocated_clusters": 0, 00:17:32.798 "snapshot": false, 00:17:32.798 "clone": false, 00:17:32.798 "esnap_clone": false 00:17:32.798 } 00:17:32.798 } 00:17:32.798 } 00:17:32.798 ]' 00:17:32.798 17:58:51 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:32.798 17:58:51 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:32.798 17:58:51 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:32.798 17:58:51 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:32.798 17:58:51 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:32.798 17:58:51 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:32.798 17:58:51 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:17:32.798 17:58:51 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:17:32.798 17:58:51 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:33.055 17:58:51 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:33.055 17:58:51 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:33.055 17:58:51 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size fb2afcdb-feb6-47d7-b2e8-d565b92af853 00:17:33.055 17:58:51 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=fb2afcdb-feb6-47d7-b2e8-d565b92af853 00:17:33.055 17:58:51 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:33.055 17:58:51 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:33.055 17:58:51 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:33.055 17:58:51 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fb2afcdb-feb6-47d7-b2e8-d565b92af853 00:17:33.312 17:58:51 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:33.312 { 00:17:33.312 "name": "fb2afcdb-feb6-47d7-b2e8-d565b92af853", 00:17:33.312 "aliases": [ 00:17:33.312 "lvs/nvme0n1p0" 00:17:33.312 ], 00:17:33.312 "product_name": "Logical Volume", 00:17:33.312 "block_size": 4096, 00:17:33.312 "num_blocks": 26476544, 00:17:33.312 "uuid": "fb2afcdb-feb6-47d7-b2e8-d565b92af853", 00:17:33.312 "assigned_rate_limits": { 00:17:33.312 "rw_ios_per_sec": 0, 00:17:33.312 "rw_mbytes_per_sec": 0, 00:17:33.312 "r_mbytes_per_sec": 0, 00:17:33.312 "w_mbytes_per_sec": 0 00:17:33.312 }, 00:17:33.312 "claimed": false, 00:17:33.312 "zoned": false, 00:17:33.312 "supported_io_types": { 00:17:33.312 "read": true, 00:17:33.312 "write": true, 00:17:33.312 "unmap": true, 00:17:33.312 "flush": false, 00:17:33.312 "reset": true, 00:17:33.312 "nvme_admin": false, 00:17:33.312 "nvme_io": false, 00:17:33.312 "nvme_io_md": false, 00:17:33.312 "write_zeroes": true, 00:17:33.312 "zcopy": false, 00:17:33.312 "get_zone_info": false, 00:17:33.312 "zone_management": false, 00:17:33.312 "zone_append": false, 00:17:33.312 "compare": false, 00:17:33.312 "compare_and_write": false, 00:17:33.312 "abort": false, 00:17:33.312 "seek_hole": true, 00:17:33.312 "seek_data": true, 00:17:33.312 "copy": false, 00:17:33.312 "nvme_iov_md": false 00:17:33.312 }, 00:17:33.312 "driver_specific": { 00:17:33.312 "lvol": { 00:17:33.312 "lvol_store_uuid": "e4759a77-fe86-438a-8bdd-6e445396c7c4", 00:17:33.312 "base_bdev": "nvme0n1", 00:17:33.312 "thin_provision": true, 00:17:33.312 "num_allocated_clusters": 0, 00:17:33.312 "snapshot": false, 00:17:33.312 "clone": false, 00:17:33.313 "esnap_clone": false 00:17:33.313 } 00:17:33.313 } 00:17:33.313 } 00:17:33.313 ]' 00:17:33.313 17:58:51 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:33.313 17:58:51 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:17:33.313 17:58:51 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:33.313 17:58:51 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:33.313 17:58:51 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:33.313 17:58:51 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:33.313 17:58:51 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:17:33.313 17:58:51 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:33.571 17:58:51 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:17:33.571 17:58:51 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:17:33.571 17:58:51 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size fb2afcdb-feb6-47d7-b2e8-d565b92af853 00:17:33.571 17:58:51 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=fb2afcdb-feb6-47d7-b2e8-d565b92af853 00:17:33.571 17:58:51 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:33.571 17:58:51 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:33.571 17:58:51 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:33.571 17:58:51 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fb2afcdb-feb6-47d7-b2e8-d565b92af853 00:17:33.829 17:58:52 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:33.829 { 00:17:33.829 "name": "fb2afcdb-feb6-47d7-b2e8-d565b92af853", 00:17:33.829 "aliases": [ 00:17:33.829 "lvs/nvme0n1p0" 00:17:33.829 ], 00:17:33.829 "product_name": "Logical Volume", 00:17:33.829 "block_size": 4096, 00:17:33.829 "num_blocks": 26476544, 00:17:33.829 "uuid": "fb2afcdb-feb6-47d7-b2e8-d565b92af853", 00:17:33.829 "assigned_rate_limits": { 00:17:33.829 "rw_ios_per_sec": 0, 00:17:33.829 "rw_mbytes_per_sec": 0, 00:17:33.829 "r_mbytes_per_sec": 0, 00:17:33.829 "w_mbytes_per_sec": 0 00:17:33.829 }, 00:17:33.829 "claimed": false, 00:17:33.829 "zoned": false, 00:17:33.829 "supported_io_types": { 00:17:33.829 "read": true, 00:17:33.829 "write": true, 00:17:33.829 "unmap": true, 00:17:33.829 "flush": false, 00:17:33.829 "reset": true, 00:17:33.829 "nvme_admin": false, 00:17:33.829 "nvme_io": false, 00:17:33.829 "nvme_io_md": false, 00:17:33.829 "write_zeroes": true, 00:17:33.829 "zcopy": false, 00:17:33.829 "get_zone_info": false, 00:17:33.829 "zone_management": false, 00:17:33.829 "zone_append": false, 00:17:33.829 "compare": false, 00:17:33.829 "compare_and_write": false, 00:17:33.829 "abort": false, 00:17:33.829 "seek_hole": true, 00:17:33.829 "seek_data": true, 00:17:33.829 "copy": false, 00:17:33.829 "nvme_iov_md": false 00:17:33.829 }, 00:17:33.829 "driver_specific": { 00:17:33.829 "lvol": { 00:17:33.829 "lvol_store_uuid": "e4759a77-fe86-438a-8bdd-6e445396c7c4", 00:17:33.829 "base_bdev": "nvme0n1", 00:17:33.829 "thin_provision": true, 00:17:33.829 "num_allocated_clusters": 0, 00:17:33.829 "snapshot": false, 00:17:33.829 "clone": false, 00:17:33.829 "esnap_clone": false 00:17:33.829 } 00:17:33.829 } 00:17:33.829 } 00:17:33.829 ]' 00:17:33.829 17:58:52 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:33.829 17:58:52 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:33.829 17:58:52 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:33.829 17:58:52 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:17:33.829 17:58:52 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:33.829 17:58:52 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:33.829 17:58:52 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:17:33.829 17:58:52 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d fb2afcdb-feb6-47d7-b2e8-d565b92af853 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:17:34.088 [2024-10-25 17:58:52.302202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.088 [2024-10-25 17:58:52.302247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:34.088 [2024-10-25 17:58:52.302269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:34.088 [2024-10-25 17:58:52.302276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.088 [2024-10-25 17:58:52.304542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.088 [2024-10-25 17:58:52.304580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:34.088 [2024-10-25 17:58:52.304592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.244 ms 00:17:34.088 [2024-10-25 17:58:52.304599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.088 [2024-10-25 17:58:52.304680] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:34.088 [2024-10-25 17:58:52.305238] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:34.088 [2024-10-25 17:58:52.305263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.088 [2024-10-25 17:58:52.305271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:34.088 [2024-10-25 17:58:52.305280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:17:34.088 [2024-10-25 17:58:52.305287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.088 [2024-10-25 17:58:52.305379] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 12f9a83d-3b9c-43be-8d23-b592e8419cb1 00:17:34.088 [2024-10-25 17:58:52.306328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.088 [2024-10-25 17:58:52.306359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:34.088 [2024-10-25 17:58:52.306369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:34.088 [2024-10-25 17:58:52.306377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.088 [2024-10-25 17:58:52.311169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.088 [2024-10-25 17:58:52.311195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:34.088 [2024-10-25 17:58:52.311203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.734 ms 00:17:34.088 [2024-10-25 17:58:52.311213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.088 [2024-10-25 17:58:52.311323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.088 [2024-10-25 17:58:52.311334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:34.088 [2024-10-25 17:58:52.311340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.056 ms 00:17:34.088 [2024-10-25 17:58:52.311351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.088 [2024-10-25 17:58:52.311372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.088 [2024-10-25 17:58:52.311381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:34.088 [2024-10-25 17:58:52.311387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:34.088 [2024-10-25 17:58:52.311394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.088 [2024-10-25 17:58:52.311417] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:34.088 [2024-10-25 17:58:52.314402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.088 [2024-10-25 17:58:52.314426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:34.088 [2024-10-25 17:58:52.314434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.987 ms 00:17:34.088 [2024-10-25 17:58:52.314440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.088 [2024-10-25 17:58:52.314478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.089 [2024-10-25 17:58:52.314484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:34.089 [2024-10-25 17:58:52.314492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:34.089 [2024-10-25 17:58:52.314508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.089 [2024-10-25 17:58:52.314530] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:34.089 [2024-10-25 17:58:52.314648] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:34.089 [2024-10-25 17:58:52.314661] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:34.089 [2024-10-25 17:58:52.314670] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:34.089 [2024-10-25 17:58:52.314679] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:34.089 [2024-10-25 17:58:52.314686] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:34.089 [2024-10-25 17:58:52.314694] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:34.089 [2024-10-25 17:58:52.314699] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:34.089 [2024-10-25 17:58:52.314706] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:34.089 [2024-10-25 17:58:52.314712] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:34.089 [2024-10-25 17:58:52.314719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.089 [2024-10-25 17:58:52.314726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:34.089 [2024-10-25 17:58:52.314733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:17:34.089 [2024-10-25 17:58:52.314739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.089 [2024-10-25 17:58:52.314819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.089 
[2024-10-25 17:58:52.314826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:34.089 [2024-10-25 17:58:52.314834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:34.089 [2024-10-25 17:58:52.314839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.089 [2024-10-25 17:58:52.314932] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:34.089 [2024-10-25 17:58:52.314943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:34.089 [2024-10-25 17:58:52.314953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:34.089 [2024-10-25 17:58:52.314959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.089 [2024-10-25 17:58:52.314967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:34.089 [2024-10-25 17:58:52.314972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:34.089 [2024-10-25 17:58:52.314978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:34.089 [2024-10-25 17:58:52.314984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:34.089 [2024-10-25 17:58:52.314990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:34.089 [2024-10-25 17:58:52.314995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:34.089 [2024-10-25 17:58:52.315002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:34.089 [2024-10-25 17:58:52.315007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:34.089 [2024-10-25 17:58:52.315014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:34.089 [2024-10-25 17:58:52.315019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:34.089 [2024-10-25 17:58:52.315026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:34.089 [2024-10-25 17:58:52.315031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.089 [2024-10-25 17:58:52.315039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:34.089 [2024-10-25 17:58:52.315044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:34.089 [2024-10-25 17:58:52.315051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.089 [2024-10-25 17:58:52.315056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:34.089 [2024-10-25 17:58:52.315064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:34.089 [2024-10-25 17:58:52.315069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:34.089 [2024-10-25 17:58:52.315075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:34.089 [2024-10-25 17:58:52.315080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:34.089 [2024-10-25 17:58:52.315087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:34.089 [2024-10-25 17:58:52.315092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:34.089 [2024-10-25 17:58:52.315098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:34.089 [2024-10-25 17:58:52.315103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:34.089 [2024-10-25 17:58:52.315109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:17:34.089 [2024-10-25 17:58:52.315114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:34.089 [2024-10-25 17:58:52.315120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:34.089 [2024-10-25 17:58:52.315126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:34.089 [2024-10-25 17:58:52.315134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:34.089 [2024-10-25 17:58:52.315139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:34.089 [2024-10-25 17:58:52.315145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:34.089 [2024-10-25 17:58:52.315150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:34.089 [2024-10-25 17:58:52.315156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:34.089 [2024-10-25 17:58:52.315162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:34.089 [2024-10-25 17:58:52.315168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:34.089 [2024-10-25 17:58:52.315173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.089 [2024-10-25 17:58:52.315179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:34.089 [2024-10-25 17:58:52.315184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:34.089 [2024-10-25 17:58:52.315190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.089 [2024-10-25 17:58:52.315197] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:34.089 [2024-10-25 17:58:52.315204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:34.089 [2024-10-25 17:58:52.315209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:34.089 [2024-10-25 17:58:52.315216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.089 [2024-10-25 17:58:52.315222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:34.089 [2024-10-25 17:58:52.315231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:34.089 [2024-10-25 17:58:52.315237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:34.089 [2024-10-25 17:58:52.315243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:34.089 [2024-10-25 17:58:52.315248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:34.089 [2024-10-25 17:58:52.315255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:34.089 [2024-10-25 17:58:52.315263] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:34.089 [2024-10-25 17:58:52.315271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:34.089 [2024-10-25 17:58:52.315277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:34.089 [2024-10-25 17:58:52.315284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:34.089 [2024-10-25 17:58:52.315290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:17:34.089 [2024-10-25 17:58:52.315297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:34.089 [2024-10-25 17:58:52.315303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:34.089 [2024-10-25 17:58:52.315309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:34.089 [2024-10-25 17:58:52.315315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:34.089 [2024-10-25 17:58:52.315321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:34.089 [2024-10-25 17:58:52.315327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:34.089 [2024-10-25 17:58:52.315335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:34.089 [2024-10-25 17:58:52.315340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:34.089 [2024-10-25 17:58:52.315347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:34.089 [2024-10-25 17:58:52.315353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:34.089 [2024-10-25 17:58:52.315360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:34.089 [2024-10-25 17:58:52.315366] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:34.089 [2024-10-25 17:58:52.315373] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:34.089 [2024-10-25 17:58:52.315379] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:34.089 [2024-10-25 17:58:52.315387] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:34.089 [2024-10-25 17:58:52.315393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:34.089 [2024-10-25 17:58:52.315399] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:34.089 [2024-10-25 17:58:52.315407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.089 [2024-10-25 17:58:52.315417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:34.089 [2024-10-25 17:58:52.315423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:17:34.089 [2024-10-25 17:58:52.315430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.089 [2024-10-25 17:58:52.315497] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:17:34.089 [2024-10-25 17:58:52.315508] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:36.619 [2024-10-25 17:58:54.672289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.619 [2024-10-25 17:58:54.672457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:36.619 [2024-10-25 17:58:54.672480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2356.781 ms 00:17:36.619 [2024-10-25 17:58:54.672492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.619 [2024-10-25 17:58:54.698236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.619 [2024-10-25 17:58:54.698282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:36.619 [2024-10-25 17:58:54.698296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.476 ms 00:17:36.619 [2024-10-25 17:58:54.698305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.619 [2024-10-25 17:58:54.698447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.619 [2024-10-25 17:58:54.698459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:36.619 [2024-10-25 17:58:54.698467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:36.619 [2024-10-25 17:58:54.698478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.619 [2024-10-25 17:58:54.736318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.619 [2024-10-25 17:58:54.736365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:36.619 [2024-10-25 17:58:54.736380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.797 ms 00:17:36.620 [2024-10-25 17:58:54.736390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.620 [2024-10-25 17:58:54.736496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.620 [2024-10-25 17:58:54.736509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:36.620 [2024-10-25 17:58:54.736518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:36.620 [2024-10-25 17:58:54.736527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.620 [2024-10-25 17:58:54.736873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.620 [2024-10-25 17:58:54.736900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:36.620 [2024-10-25 17:58:54.736909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:17:36.620 [2024-10-25 17:58:54.736918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.620 [2024-10-25 17:58:54.737038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.620 [2024-10-25 17:58:54.737057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:36.620 [2024-10-25 17:58:54.737066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:17:36.620 [2024-10-25 17:58:54.737077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.620 [2024-10-25 17:58:54.752178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.620 [2024-10-25 17:58:54.752328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:17:36.620 [2024-10-25 17:58:54.752345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.062 ms 00:17:36.620 [2024-10-25 17:58:54.752356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.620 [2024-10-25 17:58:54.763798] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:36.620 [2024-10-25 17:58:54.777440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.620 [2024-10-25 17:58:54.777575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:36.620 [2024-10-25 17:58:54.777628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.969 ms 00:17:36.620 [2024-10-25 17:58:54.777653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.620 [2024-10-25 17:58:54.841585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.620 [2024-10-25 17:58:54.841747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:36.620 [2024-10-25 17:58:54.841805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.837 ms 00:17:36.620 [2024-10-25 17:58:54.841830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.620 [2024-10-25 17:58:54.842059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.620 [2024-10-25 17:58:54.842133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:36.620 [2024-10-25 17:58:54.842206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:17:36.620 [2024-10-25 17:58:54.842229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.620 [2024-10-25 17:58:54.878349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.620 [2024-10-25 17:58:54.878502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:36.620 [2024-10-25 17:58:54.878586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.073 ms 00:17:36.620 [2024-10-25 17:58:54.878615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.620 [2024-10-25 17:58:54.901247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.620 [2024-10-25 17:58:54.901362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:36.620 [2024-10-25 17:58:54.901453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.559 ms 00:17:36.620 [2024-10-25 17:58:54.901476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.620 [2024-10-25 17:58:54.902085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.620 [2024-10-25 17:58:54.902168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:36.620 [2024-10-25 17:58:54.902224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:17:36.620 [2024-10-25 17:58:54.902248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.620 [2024-10-25 17:58:54.969118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.620 [2024-10-25 17:58:54.969272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:36.620 [2024-10-25 17:58:54.969335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.765 ms 00:17:36.620 [2024-10-25 17:58:54.969361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
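The L2P notices above are sized by the layout dump printed earlier in this startup: 23592960 L2P entries at an address size of 4 bytes is exactly the 90.00 MiB l2p region, and because trim.sh invoked bdev_ftl_create with --l2p_dram_limit 60, only about a 60 MiB window of that table may stay resident, hence the "l2p maximum resident size is: 59 (of 60) MiB" line. A quick sanity check of that arithmetic (plain shell, nothing SPDK-specific):

    # 23592960 L2P entries x 4 B per address, expressed in MiB:
    echo $(( 23592960 * 4 / 1024 / 1024 ))             # 90 -> the l2p region size
    # the same entry count as mapped 4096 B blocks, expressed in GiB:
    echo $(( 23592960 * 4096 / 1024 / 1024 / 1024 ))   # 90 -> addressable user data

So the mapping table is larger than the DRAM budget, which is why the L2P runs as a cache here rather than fully resident.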
00:17:36.620 [2024-10-25 17:58:54.993397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.620 [2024-10-25 17:58:54.993535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:36.620 [2024-10-25 17:58:54.993608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.925 ms 00:17:36.620 [2024-10-25 17:58:54.993633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.620 [2024-10-25 17:58:55.016651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.620 [2024-10-25 17:58:55.016762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:36.620 [2024-10-25 17:58:55.016812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.946 ms 00:17:36.620 [2024-10-25 17:58:55.016833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.620 [2024-10-25 17:58:55.040597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.620 [2024-10-25 17:58:55.040736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:36.620 [2024-10-25 17:58:55.040812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.679 ms 00:17:36.620 [2024-10-25 17:58:55.040855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.620 [2024-10-25 17:58:55.040962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.620 [2024-10-25 17:58:55.041117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:36.620 [2024-10-25 17:58:55.041146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:36.620 [2024-10-25 17:58:55.041168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.620 [2024-10-25 17:58:55.041254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.620 [2024-10-25 17:58:55.041318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:36.620 [2024-10-25 17:58:55.041373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:36.620 [2024-10-25 17:58:55.041392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.620 [2024-10-25 17:58:55.042196] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:36.620 [2024-10-25 17:58:55.045465] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2739.716 ms, result 0 00:17:36.620 [2024-10-25 17:58:55.046332] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:36.620 { 00:17:36.620 "name": "ftl0", 00:17:36.620 "uuid": "12f9a83d-3b9c-43be-8d23-b592e8419cb1" 00:17:36.620 } 00:17:36.879 17:58:55 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:17:36.879 17:58:55 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:17:36.879 17:58:55 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:36.879 17:58:55 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:17:36.879 17:58:55 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:36.879 17:58:55 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:36.879 17:58:55 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:36.879 17:58:55 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:37.139 [ 00:17:37.139 { 00:17:37.139 "name": "ftl0", 00:17:37.139 "aliases": [ 00:17:37.139 "12f9a83d-3b9c-43be-8d23-b592e8419cb1" 00:17:37.139 ], 00:17:37.139 "product_name": "FTL disk", 00:17:37.139 "block_size": 4096, 00:17:37.139 "num_blocks": 23592960, 00:17:37.139 "uuid": "12f9a83d-3b9c-43be-8d23-b592e8419cb1", 00:17:37.139 "assigned_rate_limits": { 00:17:37.139 "rw_ios_per_sec": 0, 00:17:37.139 "rw_mbytes_per_sec": 0, 00:17:37.139 "r_mbytes_per_sec": 0, 00:17:37.139 "w_mbytes_per_sec": 0 00:17:37.139 }, 00:17:37.139 "claimed": false, 00:17:37.139 "zoned": false, 00:17:37.139 "supported_io_types": { 00:17:37.139 "read": true, 00:17:37.139 "write": true, 00:17:37.139 "unmap": true, 00:17:37.139 "flush": true, 00:17:37.139 "reset": false, 00:17:37.139 "nvme_admin": false, 00:17:37.139 "nvme_io": false, 00:17:37.139 "nvme_io_md": false, 00:17:37.139 "write_zeroes": true, 00:17:37.139 "zcopy": false, 00:17:37.139 "get_zone_info": false, 00:17:37.139 "zone_management": false, 00:17:37.139 "zone_append": false, 00:17:37.139 "compare": false, 00:17:37.139 "compare_and_write": false, 00:17:37.139 "abort": false, 00:17:37.139 "seek_hole": false, 00:17:37.139 "seek_data": false, 00:17:37.139 "copy": false, 00:17:37.139 "nvme_iov_md": false 00:17:37.139 }, 00:17:37.139 "driver_specific": { 00:17:37.139 "ftl": { 00:17:37.139 "base_bdev": "fb2afcdb-feb6-47d7-b2e8-d565b92af853", 00:17:37.139 "cache": "nvc0n1p0" 00:17:37.139 } 00:17:37.139 } 00:17:37.139 } 00:17:37.139 ] 00:17:37.139 17:58:55 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:17:37.139 17:58:55 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:17:37.139 17:58:55 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:37.398 17:58:55 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:17:37.398 17:58:55 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:17:37.656 17:58:55 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:17:37.657 { 00:17:37.657 "name": "ftl0", 00:17:37.657 "aliases": [ 00:17:37.657 "12f9a83d-3b9c-43be-8d23-b592e8419cb1" 00:17:37.657 ], 00:17:37.657 "product_name": "FTL disk", 00:17:37.657 "block_size": 4096, 00:17:37.657 "num_blocks": 23592960, 00:17:37.657 "uuid": "12f9a83d-3b9c-43be-8d23-b592e8419cb1", 00:17:37.657 "assigned_rate_limits": { 00:17:37.657 "rw_ios_per_sec": 0, 00:17:37.657 "rw_mbytes_per_sec": 0, 00:17:37.657 "r_mbytes_per_sec": 0, 00:17:37.657 "w_mbytes_per_sec": 0 00:17:37.657 }, 00:17:37.657 "claimed": false, 00:17:37.657 "zoned": false, 00:17:37.657 "supported_io_types": { 00:17:37.657 "read": true, 00:17:37.657 "write": true, 00:17:37.657 "unmap": true, 00:17:37.657 "flush": true, 00:17:37.657 "reset": false, 00:17:37.657 "nvme_admin": false, 00:17:37.657 "nvme_io": false, 00:17:37.657 "nvme_io_md": false, 00:17:37.657 "write_zeroes": true, 00:17:37.657 "zcopy": false, 00:17:37.657 "get_zone_info": false, 00:17:37.657 "zone_management": false, 00:17:37.657 "zone_append": false, 00:17:37.657 "compare": false, 00:17:37.657 "compare_and_write": false, 00:17:37.657 "abort": false, 00:17:37.657 "seek_hole": false, 00:17:37.657 "seek_data": false, 00:17:37.657 "copy": false, 00:17:37.657 "nvme_iov_md": false 00:17:37.657 }, 00:17:37.657 "driver_specific": { 00:17:37.657 "ftl": { 00:17:37.657 "base_bdev": "fb2afcdb-feb6-47d7-b2e8-d565b92af853", 
00:17:37.657 "cache": "nvc0n1p0" 00:17:37.657 } 00:17:37.657 } 00:17:37.657 } 00:17:37.657 ]' 00:17:37.657 17:58:55 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:17:37.657 17:58:55 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:17:37.657 17:58:55 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:37.657 [2024-10-25 17:58:56.082197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.657 [2024-10-25 17:58:56.082249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:37.657 [2024-10-25 17:58:56.082263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:37.657 [2024-10-25 17:58:56.082272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.657 [2024-10-25 17:58:56.082307] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:37.657 [2024-10-25 17:58:56.084892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.657 [2024-10-25 17:58:56.084920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:37.657 [2024-10-25 17:58:56.084936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.567 ms 00:17:37.657 [2024-10-25 17:58:56.084945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.657 [2024-10-25 17:58:56.085433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.657 [2024-10-25 17:58:56.085460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:37.657 [2024-10-25 17:58:56.085471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.459 ms 00:17:37.657 [2024-10-25 17:58:56.085480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.657 [2024-10-25 17:58:56.089216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.657 [2024-10-25 17:58:56.089246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:37.657 [2024-10-25 17:58:56.089259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.701 ms 00:17:37.657 [2024-10-25 17:58:56.089269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.917 [2024-10-25 17:58:56.096223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.917 [2024-10-25 17:58:56.096250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:37.917 [2024-10-25 17:58:56.096261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.897 ms 00:17:37.917 [2024-10-25 17:58:56.096270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.917 [2024-10-25 17:58:56.119890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.917 [2024-10-25 17:58:56.119922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:37.917 [2024-10-25 17:58:56.119937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.544 ms 00:17:37.917 [2024-10-25 17:58:56.119944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.917 [2024-10-25 17:58:56.134679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.917 [2024-10-25 17:58:56.134714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:37.917 [2024-10-25 17:58:56.134729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 14.674 ms 00:17:37.917 [2024-10-25 17:58:56.134737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.917 [2024-10-25 17:58:56.134929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.917 [2024-10-25 17:58:56.134942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:37.917 [2024-10-25 17:58:56.134952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:17:37.917 [2024-10-25 17:58:56.134960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.917 [2024-10-25 17:58:56.157910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.917 [2024-10-25 17:58:56.158029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:37.917 [2024-10-25 17:58:56.158048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.922 ms 00:17:37.917 [2024-10-25 17:58:56.158056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.917 [2024-10-25 17:58:56.180568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.917 [2024-10-25 17:58:56.180672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:37.917 [2024-10-25 17:58:56.180690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.457 ms 00:17:37.917 [2024-10-25 17:58:56.180697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.917 [2024-10-25 17:58:56.202304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.917 [2024-10-25 17:58:56.202331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:37.917 [2024-10-25 17:58:56.202342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.555 ms 00:17:37.917 [2024-10-25 17:58:56.202349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.917 [2024-10-25 17:58:56.224489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.917 [2024-10-25 17:58:56.224606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:37.917 [2024-10-25 17:58:56.224624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.042 ms 00:17:37.917 [2024-10-25 17:58:56.224631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.917 [2024-10-25 17:58:56.224682] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:37.917 [2024-10-25 17:58:56.224697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224761] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 
[2024-10-25 17:58:56.224982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.224998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.225007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.225014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.225024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.225032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.225040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.225048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.225057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.225064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.225072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.225080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.225089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.225096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.225105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:37.917 [2024-10-25 17:58:56.225113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:17:37.918 [2024-10-25 17:58:56.225187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:37.918 [2024-10-25 17:58:56.225550] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:37.918 [2024-10-25 17:58:56.225570] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 12f9a83d-3b9c-43be-8d23-b592e8419cb1 00:17:37.918 [2024-10-25 17:58:56.225578] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:37.918 [2024-10-25 17:58:56.225586] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:37.918 [2024-10-25 17:58:56.225593] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:37.918 [2024-10-25 17:58:56.225603] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:37.918 [2024-10-25 17:58:56.225613] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:37.918 [2024-10-25 17:58:56.225622] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:17:37.918 [2024-10-25 17:58:56.225631] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:37.918 [2024-10-25 17:58:56.225639] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:37.918 [2024-10-25 17:58:56.225645] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:37.918 [2024-10-25 17:58:56.225654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.918 [2024-10-25 17:58:56.225661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:37.918 [2024-10-25 17:58:56.225671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms 00:17:37.918 [2024-10-25 17:58:56.225679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.918 [2024-10-25 17:58:56.238031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.918 [2024-10-25 17:58:56.238061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:37.918 [2024-10-25 17:58:56.238074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.321 ms 00:17:37.918 [2024-10-25 17:58:56.238082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.918 [2024-10-25 17:58:56.238459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.918 [2024-10-25 17:58:56.238478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:37.918 [2024-10-25 17:58:56.238489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:17:37.918 [2024-10-25 17:58:56.238496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.918 [2024-10-25 17:58:56.281487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.918 [2024-10-25 17:58:56.281535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:37.918 [2024-10-25 17:58:56.281547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.918 [2024-10-25 17:58:56.281569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.918 [2024-10-25 17:58:56.281691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.918 [2024-10-25 17:58:56.281700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:37.918 [2024-10-25 17:58:56.281710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.918 [2024-10-25 17:58:56.281717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.918 [2024-10-25 17:58:56.281781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.918 [2024-10-25 17:58:56.281790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:37.918 [2024-10-25 17:58:56.281801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.918 [2024-10-25 17:58:56.281809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.918 [2024-10-25 17:58:56.281843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.918 [2024-10-25 17:58:56.281850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:37.918 [2024-10-25 17:58:56.281859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.918 [2024-10-25 17:58:56.281867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.177 [2024-10-25 17:58:56.362648] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.177 [2024-10-25 17:58:56.362696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:38.177 [2024-10-25 17:58:56.362709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.177 [2024-10-25 17:58:56.362716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.177 [2024-10-25 17:58:56.424728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.177 [2024-10-25 17:58:56.424773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:38.177 [2024-10-25 17:58:56.424785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.177 [2024-10-25 17:58:56.424793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.177 [2024-10-25 17:58:56.424877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.177 [2024-10-25 17:58:56.424886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:38.177 [2024-10-25 17:58:56.424909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.177 [2024-10-25 17:58:56.424917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.177 [2024-10-25 17:58:56.424965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.177 [2024-10-25 17:58:56.424976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:38.177 [2024-10-25 17:58:56.424985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.177 [2024-10-25 17:58:56.424992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.177 [2024-10-25 17:58:56.425097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.177 [2024-10-25 17:58:56.425106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:38.177 [2024-10-25 17:58:56.425115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.177 [2024-10-25 17:58:56.425122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.177 [2024-10-25 17:58:56.425168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.177 [2024-10-25 17:58:56.425176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:38.177 [2024-10-25 17:58:56.425187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.177 [2024-10-25 17:58:56.425194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.177 [2024-10-25 17:58:56.425242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.177 [2024-10-25 17:58:56.425250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:38.177 [2024-10-25 17:58:56.425263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.177 [2024-10-25 17:58:56.425270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.177 [2024-10-25 17:58:56.425326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.177 [2024-10-25 17:58:56.425337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:38.177 [2024-10-25 17:58:56.425346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.177 [2024-10-25 17:58:56.425353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:38.177 [2024-10-25 17:58:56.425526] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 343.318 ms, result 0 00:17:38.177 true 00:17:38.177 17:58:56 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 73509 00:17:38.177 17:58:56 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 73509 ']' 00:17:38.177 17:58:56 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 73509 00:17:38.177 17:58:56 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:38.177 17:58:56 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:38.177 17:58:56 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73509 00:17:38.177 killing process with pid 73509 00:17:38.177 17:58:56 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:38.177 17:58:56 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:38.177 17:58:56 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73509' 00:17:38.177 17:58:56 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 73509 00:17:38.177 17:58:56 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 73509 00:17:50.375 17:59:08 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:17:50.941 65536+0 records in 00:17:50.941 65536+0 records out 00:17:50.941 268435456 bytes (268 MB, 256 MiB) copied, 1.06856 s, 251 MB/s 00:17:50.941 17:59:09 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:50.941 [2024-10-25 17:59:09.286996] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:17:50.941 [2024-10-25 17:59:09.287118] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73685 ] 00:17:51.199 [2024-10-25 17:59:09.445304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.199 [2024-10-25 17:59:09.542950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.458 [2024-10-25 17:59:09.794085] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:51.458 [2024-10-25 17:59:09.794149] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:51.717 [2024-10-25 17:59:09.948240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.717 [2024-10-25 17:59:09.948299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:51.717 [2024-10-25 17:59:09.948312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:51.717 [2024-10-25 17:59:09.948320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.717 [2024-10-25 17:59:09.950972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.717 [2024-10-25 17:59:09.951148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:51.717 [2024-10-25 17:59:09.951165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.634 ms 00:17:51.717 [2024-10-25 17:59:09.951173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.717 [2024-10-25 17:59:09.951239] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:51.717 [2024-10-25 17:59:09.951949] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:51.717 [2024-10-25 17:59:09.951971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.717 [2024-10-25 17:59:09.951980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:51.717 [2024-10-25 17:59:09.951989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.739 ms 00:17:51.717 [2024-10-25 17:59:09.951996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.717 [2024-10-25 17:59:09.953090] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:51.717 [2024-10-25 17:59:09.965337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.717 [2024-10-25 17:59:09.965371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:51.717 [2024-10-25 17:59:09.965386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.248 ms 00:17:51.717 [2024-10-25 17:59:09.965394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.717 [2024-10-25 17:59:09.965495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.717 [2024-10-25 17:59:09.965506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:51.717 [2024-10-25 17:59:09.965515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:51.717 [2024-10-25 17:59:09.965522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.717 [2024-10-25 17:59:09.970325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:51.717 [2024-10-25 17:59:09.970357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:51.717 [2024-10-25 17:59:09.970366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.744 ms 00:17:51.717 [2024-10-25 17:59:09.970374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.717 [2024-10-25 17:59:09.970463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.717 [2024-10-25 17:59:09.970473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:51.717 [2024-10-25 17:59:09.970481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:51.717 [2024-10-25 17:59:09.970488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.717 [2024-10-25 17:59:09.970515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.717 [2024-10-25 17:59:09.970523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:51.717 [2024-10-25 17:59:09.970534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:51.717 [2024-10-25 17:59:09.970541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.717 [2024-10-25 17:59:09.970577] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:51.717 [2024-10-25 17:59:09.973945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.717 [2024-10-25 17:59:09.973973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:51.717 [2024-10-25 17:59:09.973983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.373 ms 00:17:51.717 [2024-10-25 17:59:09.973990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.717 [2024-10-25 17:59:09.974024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.717 [2024-10-25 17:59:09.974032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:51.717 [2024-10-25 17:59:09.974040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:51.717 [2024-10-25 17:59:09.974047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.717 [2024-10-25 17:59:09.974067] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:51.717 [2024-10-25 17:59:09.974087] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:51.717 [2024-10-25 17:59:09.974123] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:51.717 [2024-10-25 17:59:09.974138] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:51.717 [2024-10-25 17:59:09.974239] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:51.717 [2024-10-25 17:59:09.974249] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:51.717 [2024-10-25 17:59:09.974259] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:51.717 [2024-10-25 17:59:09.974269] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:51.717 [2024-10-25 17:59:09.974278] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:51.717 [2024-10-25 17:59:09.974288] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:51.717 [2024-10-25 17:59:09.974295] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:51.717 [2024-10-25 17:59:09.974303] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:51.717 [2024-10-25 17:59:09.974309] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:51.717 [2024-10-25 17:59:09.974317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.717 [2024-10-25 17:59:09.974325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:51.717 [2024-10-25 17:59:09.974333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:17:51.717 [2024-10-25 17:59:09.974339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.717 [2024-10-25 17:59:09.974430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.717 [2024-10-25 17:59:09.974439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:51.717 [2024-10-25 17:59:09.974446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:51.717 [2024-10-25 17:59:09.974456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.717 [2024-10-25 17:59:09.974586] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:51.717 [2024-10-25 17:59:09.974598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:51.717 [2024-10-25 17:59:09.974606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:51.717 [2024-10-25 17:59:09.974614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.717 [2024-10-25 17:59:09.974622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:51.717 [2024-10-25 17:59:09.974629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:51.717 [2024-10-25 17:59:09.974635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:51.717 [2024-10-25 17:59:09.974643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:51.717 [2024-10-25 17:59:09.974650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:51.717 [2024-10-25 17:59:09.974656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:51.717 [2024-10-25 17:59:09.974663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:51.717 [2024-10-25 17:59:09.974670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:51.717 [2024-10-25 17:59:09.974676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:51.718 [2024-10-25 17:59:09.974688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:51.718 [2024-10-25 17:59:09.974700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:51.718 [2024-10-25 17:59:09.974707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.718 [2024-10-25 17:59:09.974713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:51.718 [2024-10-25 17:59:09.974720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:51.718 [2024-10-25 17:59:09.974726] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.718 [2024-10-25 17:59:09.974733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:51.718 [2024-10-25 17:59:09.974740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:51.718 [2024-10-25 17:59:09.974746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:51.718 [2024-10-25 17:59:09.974753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:51.718 [2024-10-25 17:59:09.974760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:51.718 [2024-10-25 17:59:09.974766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:51.718 [2024-10-25 17:59:09.974773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:51.718 [2024-10-25 17:59:09.974779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:51.718 [2024-10-25 17:59:09.974786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:51.718 [2024-10-25 17:59:09.974792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:51.718 [2024-10-25 17:59:09.974798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:51.718 [2024-10-25 17:59:09.974805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:51.718 [2024-10-25 17:59:09.974812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:51.718 [2024-10-25 17:59:09.974818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:51.718 [2024-10-25 17:59:09.974825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:51.718 [2024-10-25 17:59:09.974831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:51.718 [2024-10-25 17:59:09.974838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:51.718 [2024-10-25 17:59:09.974844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:51.718 [2024-10-25 17:59:09.974850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:51.718 [2024-10-25 17:59:09.974856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:51.718 [2024-10-25 17:59:09.974863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.718 [2024-10-25 17:59:09.974869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:51.718 [2024-10-25 17:59:09.974875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:51.718 [2024-10-25 17:59:09.974882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.718 [2024-10-25 17:59:09.974890] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:51.718 [2024-10-25 17:59:09.974903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:51.718 [2024-10-25 17:59:09.974910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:51.718 [2024-10-25 17:59:09.974918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.718 [2024-10-25 17:59:09.974928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:51.718 [2024-10-25 17:59:09.974935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:51.718 [2024-10-25 17:59:09.974942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:51.718 
[2024-10-25 17:59:09.974948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:51.718 [2024-10-25 17:59:09.974955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:51.718 [2024-10-25 17:59:09.974961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:51.718 [2024-10-25 17:59:09.974970] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:51.718 [2024-10-25 17:59:09.974979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:51.718 [2024-10-25 17:59:09.974987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:51.718 [2024-10-25 17:59:09.974994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:51.718 [2024-10-25 17:59:09.975001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:51.718 [2024-10-25 17:59:09.975007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:51.718 [2024-10-25 17:59:09.975014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:51.718 [2024-10-25 17:59:09.975021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:51.718 [2024-10-25 17:59:09.975027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:51.718 [2024-10-25 17:59:09.975034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:51.718 [2024-10-25 17:59:09.975041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:51.718 [2024-10-25 17:59:09.975048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:51.718 [2024-10-25 17:59:09.975055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:51.718 [2024-10-25 17:59:09.975062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:51.718 [2024-10-25 17:59:09.975073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:51.718 [2024-10-25 17:59:09.975084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:51.718 [2024-10-25 17:59:09.975091] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:51.718 [2024-10-25 17:59:09.975098] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:51.718 [2024-10-25 17:59:09.975107] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:51.718 [2024-10-25 17:59:09.975114] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:51.718 [2024-10-25 17:59:09.975121] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:51.718 [2024-10-25 17:59:09.975128] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:51.718 [2024-10-25 17:59:09.975135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.718 [2024-10-25 17:59:09.975142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:51.718 [2024-10-25 17:59:09.975149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.638 ms 00:17:51.718 [2024-10-25 17:59:09.975160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.718 [2024-10-25 17:59:10.000793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.718 [2024-10-25 17:59:10.000972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:51.718 [2024-10-25 17:59:10.000989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.581 ms 00:17:51.718 [2024-10-25 17:59:10.000997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.718 [2024-10-25 17:59:10.001123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.718 [2024-10-25 17:59:10.001133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:51.718 [2024-10-25 17:59:10.001146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:17:51.718 [2024-10-25 17:59:10.001153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.718 [2024-10-25 17:59:10.043550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.718 [2024-10-25 17:59:10.043606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:51.718 [2024-10-25 17:59:10.043619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.374 ms 00:17:51.718 [2024-10-25 17:59:10.043627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.718 [2024-10-25 17:59:10.043744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.718 [2024-10-25 17:59:10.043757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:51.718 [2024-10-25 17:59:10.043766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:51.718 [2024-10-25 17:59:10.043774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.718 [2024-10-25 17:59:10.044092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.718 [2024-10-25 17:59:10.044112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:51.718 [2024-10-25 17:59:10.044121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:17:51.718 [2024-10-25 17:59:10.044129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.718 [2024-10-25 17:59:10.044259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.718 [2024-10-25 17:59:10.044268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:51.718 [2024-10-25 17:59:10.044277] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:17:51.718 [2024-10-25 17:59:10.044288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.718 [2024-10-25 17:59:10.057570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.718 [2024-10-25 17:59:10.057601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:51.718 [2024-10-25 17:59:10.057612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.255 ms 00:17:51.718 [2024-10-25 17:59:10.057619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.718 [2024-10-25 17:59:10.069966] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:51.718 [2024-10-25 17:59:10.070002] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:51.718 [2024-10-25 17:59:10.070015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.718 [2024-10-25 17:59:10.070024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:51.718 [2024-10-25 17:59:10.070032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.286 ms 00:17:51.718 [2024-10-25 17:59:10.070039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.719 [2024-10-25 17:59:10.094264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.719 [2024-10-25 17:59:10.094303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:51.719 [2024-10-25 17:59:10.094324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.148 ms 00:17:51.719 [2024-10-25 17:59:10.094333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.719 [2024-10-25 17:59:10.105948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.719 [2024-10-25 17:59:10.105982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:51.719 [2024-10-25 17:59:10.105992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.534 ms 00:17:51.719 [2024-10-25 17:59:10.106000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.719 [2024-10-25 17:59:10.117364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.719 [2024-10-25 17:59:10.117602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:51.719 [2024-10-25 17:59:10.117619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.296 ms 00:17:51.719 [2024-10-25 17:59:10.117626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.719 [2024-10-25 17:59:10.118244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.719 [2024-10-25 17:59:10.118267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:51.719 [2024-10-25 17:59:10.118276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:17:51.719 [2024-10-25 17:59:10.118283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.977 [2024-10-25 17:59:10.172917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.977 [2024-10-25 17:59:10.172975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:51.977 [2024-10-25 17:59:10.172988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.610 ms 00:17:51.977 [2024-10-25 17:59:10.172997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.977 [2024-10-25 17:59:10.183472] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:51.977 [2024-10-25 17:59:10.197451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.977 [2024-10-25 17:59:10.197509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:51.977 [2024-10-25 17:59:10.197522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.330 ms 00:17:51.977 [2024-10-25 17:59:10.197530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.977 [2024-10-25 17:59:10.197639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.977 [2024-10-25 17:59:10.197653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:51.977 [2024-10-25 17:59:10.197662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:51.977 [2024-10-25 17:59:10.197670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.977 [2024-10-25 17:59:10.197716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.977 [2024-10-25 17:59:10.197725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:51.977 [2024-10-25 17:59:10.197732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:51.977 [2024-10-25 17:59:10.197740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.977 [2024-10-25 17:59:10.197764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.977 [2024-10-25 17:59:10.197772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:51.977 [2024-10-25 17:59:10.197783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:51.977 [2024-10-25 17:59:10.197790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.977 [2024-10-25 17:59:10.197821] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:51.977 [2024-10-25 17:59:10.197831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.977 [2024-10-25 17:59:10.197838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:51.977 [2024-10-25 17:59:10.197845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:51.977 [2024-10-25 17:59:10.197852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.977 [2024-10-25 17:59:10.221005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.977 [2024-10-25 17:59:10.221049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:51.977 [2024-10-25 17:59:10.221061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.134 ms 00:17:51.977 [2024-10-25 17:59:10.221070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.977 [2024-10-25 17:59:10.221162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.977 [2024-10-25 17:59:10.221172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:51.977 [2024-10-25 17:59:10.221181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:51.977 [2024-10-25 17:59:10.221188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:51.977 [2024-10-25 17:59:10.221988] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:51.977 [2024-10-25 17:59:10.224865] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 273.457 ms, result 0 00:17:51.977 [2024-10-25 17:59:10.225490] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:51.977 [2024-10-25 17:59:10.238392] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:52.921  [2024-10-25T17:59:12.289Z] Copying: 42/256 [MB] (42 MBps) [2024-10-25T17:59:13.660Z] Copying: 84/256 [MB] (42 MBps) [2024-10-25T17:59:14.594Z] Copying: 127/256 [MB] (43 MBps) [2024-10-25T17:59:15.528Z] Copying: 169/256 [MB] (41 MBps) [2024-10-25T17:59:16.461Z] Copying: 210/256 [MB] (41 MBps) [2024-10-25T17:59:16.461Z] Copying: 252/256 [MB] (41 MBps) [2024-10-25T17:59:16.461Z] Copying: 256/256 [MB] (average 42 MBps)[2024-10-25 17:59:16.333642] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:58.026 [2024-10-25 17:59:16.343223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.026 [2024-10-25 17:59:16.343274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:58.026 [2024-10-25 17:59:16.343289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:58.026 [2024-10-25 17:59:16.343298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.026 [2024-10-25 17:59:16.343322] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:58.026 [2024-10-25 17:59:16.346079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.026 [2024-10-25 17:59:16.346126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:58.026 [2024-10-25 17:59:16.346137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.742 ms 00:17:58.026 [2024-10-25 17:59:16.346145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.026 [2024-10-25 17:59:16.347788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.026 [2024-10-25 17:59:16.347974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:58.026 [2024-10-25 17:59:16.347992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.619 ms 00:17:58.026 [2024-10-25 17:59:16.348001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.026 [2024-10-25 17:59:16.354849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.026 [2024-10-25 17:59:16.355010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:58.026 [2024-10-25 17:59:16.355027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.827 ms 00:17:58.026 [2024-10-25 17:59:16.355047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.026 [2024-10-25 17:59:16.362047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.026 [2024-10-25 17:59:16.362191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:58.026 [2024-10-25 17:59:16.362207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.944 ms 00:17:58.027 [2024-10-25 17:59:16.362215] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.027 [2024-10-25 17:59:16.386757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.027 [2024-10-25 17:59:16.386814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:58.027 [2024-10-25 17:59:16.386828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.478 ms 00:17:58.027 [2024-10-25 17:59:16.386836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.027 [2024-10-25 17:59:16.401410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.027 [2024-10-25 17:59:16.401621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:58.027 [2024-10-25 17:59:16.401651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.520 ms 00:17:58.027 [2024-10-25 17:59:16.401661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.027 [2024-10-25 17:59:16.401822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.027 [2024-10-25 17:59:16.401833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:58.027 [2024-10-25 17:59:16.401843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:17:58.027 [2024-10-25 17:59:16.401851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.027 [2024-10-25 17:59:16.426495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.027 [2024-10-25 17:59:16.426552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:58.027 [2024-10-25 17:59:16.426585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.624 ms 00:17:58.027 [2024-10-25 17:59:16.426594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.027 [2024-10-25 17:59:16.450056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.027 [2024-10-25 17:59:16.450111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:58.027 [2024-10-25 17:59:16.450125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.405 ms 00:17:58.027 [2024-10-25 17:59:16.450134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.286 [2024-10-25 17:59:16.473293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.286 [2024-10-25 17:59:16.473494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:58.286 [2024-10-25 17:59:16.473514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.101 ms 00:17:58.286 [2024-10-25 17:59:16.473522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.286 [2024-10-25 17:59:16.496262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.286 [2024-10-25 17:59:16.496458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:58.286 [2024-10-25 17:59:16.496478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.574 ms 00:17:58.286 [2024-10-25 17:59:16.496486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.286 [2024-10-25 17:59:16.496534] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:58.286 [2024-10-25 17:59:16.496552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496587] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496788] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 
[2024-10-25 17:59:16.496982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.496997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.497005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.497012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.497020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.497028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.497036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.497044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.497051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.497058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.497065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:58.286 [2024-10-25 17:59:16.497073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 
state: free 00:17:58.287 [2024-10-25 17:59:16.497174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:58.287 [2024-10-25 17:59:16.497376] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
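
(Editorial aside: the ftl_dev_dump_bands dump above is uniform — all 100 bands report 0 / 261120 valid blocks, wr_cnt 0, state free — which is what a freshly trimmed device should look like at clean shutdown, and it agrees with the "total valid LBAs: 0" line in the statistics block that follows. A small hypothetical parser, not SPDK code, to condense such dumps:)

```python
import re
from collections import Counter

# Condense an ftl_dev_dump_bands dump; each record reads
# "Band N: <valid> / <size> wr_cnt: <writes> state: <state>".
BAND_RE = re.compile(r"Band (\d+): (\d+) / (\d+) wr_cnt: (\d+) state: (\w+)")

def summarize_bands(log_text):
    """Return (Counter of band states, total valid blocks across bands)."""
    states = Counter()
    valid = 0
    for _band, v, _size, _wr, state in BAND_RE.findall(log_text):
        states[state] += 1
        valid += int(v)
    return states, valid

# For the dump above this yields (Counter({'free': 100}), 0): every band is
# free and nothing is valid, consistent with "total valid LBAs: 0" below.
```
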
00:17:58.287 [2024-10-25 17:59:16.497384] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 12f9a83d-3b9c-43be-8d23-b592e8419cb1 00:17:58.287 [2024-10-25 17:59:16.497392] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:58.287 [2024-10-25 17:59:16.497399] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:58.287 [2024-10-25 17:59:16.497406] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:58.287 [2024-10-25 17:59:16.497415] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:58.287 [2024-10-25 17:59:16.497423] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:58.287 [2024-10-25 17:59:16.497431] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:58.287 [2024-10-25 17:59:16.497440] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:58.287 [2024-10-25 17:59:16.497446] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:58.287 [2024-10-25 17:59:16.497453] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:58.287 [2024-10-25 17:59:16.497460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.287 [2024-10-25 17:59:16.497477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:58.287 [2024-10-25 17:59:16.497489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.928 ms 00:17:58.287 [2024-10-25 17:59:16.497499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.287 [2024-10-25 17:59:16.510324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.287 [2024-10-25 17:59:16.510372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:58.287 [2024-10-25 17:59:16.510386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.803 ms 00:17:58.287 [2024-10-25 17:59:16.510396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.287 [2024-10-25 17:59:16.510810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.287 [2024-10-25 17:59:16.510839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:58.287 [2024-10-25 17:59:16.510849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:17:58.287 [2024-10-25 17:59:16.510857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.287 [2024-10-25 17:59:16.546896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.287 [2024-10-25 17:59:16.546951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:58.287 [2024-10-25 17:59:16.546963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.287 [2024-10-25 17:59:16.546971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.287 [2024-10-25 17:59:16.547077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.287 [2024-10-25 17:59:16.547092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:58.287 [2024-10-25 17:59:16.547100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.287 [2024-10-25 17:59:16.547108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.287 [2024-10-25 17:59:16.547156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.287 [2024-10-25 
17:59:16.547167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:58.287 [2024-10-25 17:59:16.547175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.287 [2024-10-25 17:59:16.547183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.287 [2024-10-25 17:59:16.547201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.287 [2024-10-25 17:59:16.547210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:58.287 [2024-10-25 17:59:16.547221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.287 [2024-10-25 17:59:16.547228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.287 [2024-10-25 17:59:16.627348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.287 [2024-10-25 17:59:16.627415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:58.287 [2024-10-25 17:59:16.627429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.287 [2024-10-25 17:59:16.627437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.287 [2024-10-25 17:59:16.692986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.287 [2024-10-25 17:59:16.693052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:58.287 [2024-10-25 17:59:16.693073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.287 [2024-10-25 17:59:16.693081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.287 [2024-10-25 17:59:16.693150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.287 [2024-10-25 17:59:16.693160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:58.287 [2024-10-25 17:59:16.693168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.287 [2024-10-25 17:59:16.693176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.287 [2024-10-25 17:59:16.693207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.287 [2024-10-25 17:59:16.693216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:58.287 [2024-10-25 17:59:16.693224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.287 [2024-10-25 17:59:16.693232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.287 [2024-10-25 17:59:16.693338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.287 [2024-10-25 17:59:16.693349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:58.287 [2024-10-25 17:59:16.693357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.287 [2024-10-25 17:59:16.693366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.287 [2024-10-25 17:59:16.693399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.287 [2024-10-25 17:59:16.693408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:58.288 [2024-10-25 17:59:16.693416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.288 [2024-10-25 17:59:16.693424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.288 [2024-10-25 17:59:16.693477] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.288 [2024-10-25 17:59:16.693487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:58.288 [2024-10-25 17:59:16.693495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.288 [2024-10-25 17:59:16.693502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.288 [2024-10-25 17:59:16.693549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.288 [2024-10-25 17:59:16.693585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:58.288 [2024-10-25 17:59:16.693595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.288 [2024-10-25 17:59:16.693603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.288 [2024-10-25 17:59:16.693769] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 350.544 ms, result 0 00:17:59.660 00:17:59.660 00:17:59.660 17:59:17 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=73782 00:17:59.660 17:59:17 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 73782 00:17:59.660 17:59:17 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:59.660 17:59:17 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 73782 ']' 00:17:59.660 17:59:17 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.660 17:59:17 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:59.660 17:59:17 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.660 17:59:17 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:59.660 17:59:17 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:59.660 [2024-10-25 17:59:17.849133] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
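
(Editorial aside: the 'FTL shutdown' process above finished with result 0, and trim.sh now starts a fresh spdk_tgt (pid 73782) for the unmap checks. The two bdev_ftl_unmap calls further down trim the first and the last 1024 blocks of ftl0, and the numbers are internally consistent with the layout that ftl_layout_setup reports later in this run — "L2P entries: 23592960", "L2P address size: 4", "Region l2p ... blocks: 90.00 MiB". The arithmetic, as a sketch with constants read off the log rather than computed by SPDK:)

```python
# Constants taken from the ftl_layout_setup notices below.
l2p_entries = 23_592_960   # "L2P entries: 23592960"
l2p_addr_size = 4          # "L2P address size: 4" (bytes per entry)
MIB = 1024 * 1024

# The second unmap ("--lba 23591936 --num_blocks 1024") ends exactly at the
# device's last LBA, i.e. it trims the final 1024 blocks.
assert 23_591_936 + 1024 == l2p_entries

# The L2P table footprint matches the layout dump's l2p region:
# 23592960 entries * 4 bytes = 90.00 MiB ("Region l2p ... blocks: 90.00 MiB").
assert l2p_entries * l2p_addr_size == 90 * MIB
print("unmap ranges and L2P sizing are consistent")
```
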
00:17:59.660 [2024-10-25 17:59:17.849253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73782 ] 00:17:59.660 [2024-10-25 17:59:18.011275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.917 [2024-10-25 17:59:18.127667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.483 17:59:18 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:00.483 17:59:18 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:18:00.483 17:59:18 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:00.803 [2024-10-25 17:59:19.026977] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:00.803 [2024-10-25 17:59:19.027058] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:00.803 [2024-10-25 17:59:19.183776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.803 [2024-10-25 17:59:19.183846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:00.803 [2024-10-25 17:59:19.183863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:00.803 [2024-10-25 17:59:19.183872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.803 [2024-10-25 17:59:19.186735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.803 [2024-10-25 17:59:19.186777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:00.803 [2024-10-25 17:59:19.186789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.842 ms 00:18:00.803 [2024-10-25 17:59:19.186797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.803 [2024-10-25 17:59:19.186905] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:00.803 [2024-10-25 17:59:19.187597] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:00.803 [2024-10-25 17:59:19.187625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.803 [2024-10-25 17:59:19.187634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:00.803 [2024-10-25 17:59:19.187645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.732 ms 00:18:00.803 [2024-10-25 17:59:19.187653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.803 [2024-10-25 17:59:19.189049] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:00.803 [2024-10-25 17:59:19.201802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.803 [2024-10-25 17:59:19.202029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:00.803 [2024-10-25 17:59:19.202051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.757 ms 00:18:00.803 [2024-10-25 17:59:19.202062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.803 [2024-10-25 17:59:19.202531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.803 [2024-10-25 17:59:19.202596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:00.803 [2024-10-25 17:59:19.202609] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:00.803 [2024-10-25 17:59:19.202620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.803 [2024-10-25 17:59:19.209528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.803 [2024-10-25 17:59:19.209777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:00.803 [2024-10-25 17:59:19.209797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.840 ms 00:18:00.803 [2024-10-25 17:59:19.209807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.803 [2024-10-25 17:59:19.209953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.803 [2024-10-25 17:59:19.209966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:00.803 [2024-10-25 17:59:19.209977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:18:00.803 [2024-10-25 17:59:19.209986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.803 [2024-10-25 17:59:19.210015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.803 [2024-10-25 17:59:19.210030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:00.803 [2024-10-25 17:59:19.210038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:00.803 [2024-10-25 17:59:19.210047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.803 [2024-10-25 17:59:19.210075] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:00.803 [2024-10-25 17:59:19.214001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.803 [2024-10-25 17:59:19.214038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:00.803 [2024-10-25 17:59:19.214050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.932 ms 00:18:00.803 [2024-10-25 17:59:19.214058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.803 [2024-10-25 17:59:19.214152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.803 [2024-10-25 17:59:19.214162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:00.803 [2024-10-25 17:59:19.214174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:00.803 [2024-10-25 17:59:19.214181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.803 [2024-10-25 17:59:19.214205] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:00.803 [2024-10-25 17:59:19.214227] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:00.803 [2024-10-25 17:59:19.214273] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:00.803 [2024-10-25 17:59:19.214290] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:00.803 [2024-10-25 17:59:19.214401] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:00.803 [2024-10-25 17:59:19.214412] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:00.803 [2024-10-25 17:59:19.214426] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:00.803 [2024-10-25 17:59:19.214436] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:00.803 [2024-10-25 17:59:19.214450] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:00.803 [2024-10-25 17:59:19.214458] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:00.803 [2024-10-25 17:59:19.214467] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:00.803 [2024-10-25 17:59:19.214475] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:00.803 [2024-10-25 17:59:19.214485] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:00.803 [2024-10-25 17:59:19.214493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.803 [2024-10-25 17:59:19.214503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:00.803 [2024-10-25 17:59:19.214512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:18:00.803 [2024-10-25 17:59:19.214521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.803 [2024-10-25 17:59:19.214628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.803 [2024-10-25 17:59:19.214641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:00.803 [2024-10-25 17:59:19.214651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:18:00.803 [2024-10-25 17:59:19.214661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.803 [2024-10-25 17:59:19.214767] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:00.803 [2024-10-25 17:59:19.214787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:00.803 [2024-10-25 17:59:19.214796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:00.803 [2024-10-25 17:59:19.214805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.803 [2024-10-25 17:59:19.214813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:00.803 [2024-10-25 17:59:19.214821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:00.803 [2024-10-25 17:59:19.214829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:00.803 [2024-10-25 17:59:19.214858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:00.803 [2024-10-25 17:59:19.214866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:00.803 [2024-10-25 17:59:19.214875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:00.803 [2024-10-25 17:59:19.214882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:00.803 [2024-10-25 17:59:19.214890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:00.803 [2024-10-25 17:59:19.214896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:00.803 [2024-10-25 17:59:19.214905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:00.803 [2024-10-25 17:59:19.214912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:00.803 [2024-10-25 17:59:19.214921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.803 
[2024-10-25 17:59:19.214928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:00.803 [2024-10-25 17:59:19.214936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:00.803 [2024-10-25 17:59:19.214942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.803 [2024-10-25 17:59:19.214951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:00.803 [2024-10-25 17:59:19.214966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:00.803 [2024-10-25 17:59:19.214974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:00.804 [2024-10-25 17:59:19.214980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:00.804 [2024-10-25 17:59:19.214990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:00.804 [2024-10-25 17:59:19.215001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:00.804 [2024-10-25 17:59:19.215009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:00.804 [2024-10-25 17:59:19.215016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:00.804 [2024-10-25 17:59:19.215024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:00.804 [2024-10-25 17:59:19.215031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:00.804 [2024-10-25 17:59:19.215039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:00.804 [2024-10-25 17:59:19.215045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:00.804 [2024-10-25 17:59:19.215056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:00.804 [2024-10-25 17:59:19.215062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:00.804 [2024-10-25 17:59:19.215070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:00.804 [2024-10-25 17:59:19.215078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:00.804 [2024-10-25 17:59:19.215086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:00.804 [2024-10-25 17:59:19.215093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:00.804 [2024-10-25 17:59:19.215102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:00.804 [2024-10-25 17:59:19.215108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:00.804 [2024-10-25 17:59:19.215118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.804 [2024-10-25 17:59:19.215125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:00.804 [2024-10-25 17:59:19.215133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:00.804 [2024-10-25 17:59:19.215140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.804 [2024-10-25 17:59:19.215148] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:00.804 [2024-10-25 17:59:19.215156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:00.804 [2024-10-25 17:59:19.215165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:00.804 [2024-10-25 17:59:19.215173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.804 [2024-10-25 17:59:19.215182] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:18:00.804 [2024-10-25 17:59:19.215189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:00.804 [2024-10-25 17:59:19.215198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:00.804 [2024-10-25 17:59:19.215204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:00.804 [2024-10-25 17:59:19.215212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:00.804 [2024-10-25 17:59:19.215218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:00.804 [2024-10-25 17:59:19.215228] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:00.804 [2024-10-25 17:59:19.215237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:00.804 [2024-10-25 17:59:19.215250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:00.804 [2024-10-25 17:59:19.215258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:00.804 [2024-10-25 17:59:19.215268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:00.804 [2024-10-25 17:59:19.215276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:00.804 [2024-10-25 17:59:19.215284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:00.804 [2024-10-25 17:59:19.215291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:00.804 [2024-10-25 17:59:19.215300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:00.804 [2024-10-25 17:59:19.215307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:00.804 [2024-10-25 17:59:19.215315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:00.804 [2024-10-25 17:59:19.215323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:00.804 [2024-10-25 17:59:19.215332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:00.804 [2024-10-25 17:59:19.215339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:00.804 [2024-10-25 17:59:19.215348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:00.804 [2024-10-25 17:59:19.215355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:00.804 [2024-10-25 17:59:19.215364] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:00.804 [2024-10-25 
17:59:19.215373] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:00.804 [2024-10-25 17:59:19.215383] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:00.804 [2024-10-25 17:59:19.215391] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:00.804 [2024-10-25 17:59:19.215399] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:00.804 [2024-10-25 17:59:19.215407] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:00.804 [2024-10-25 17:59:19.215417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.804 [2024-10-25 17:59:19.215424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:00.804 [2024-10-25 17:59:19.215433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:18:00.804 [2024-10-25 17:59:19.215440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.062 [2024-10-25 17:59:19.244406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.062 [2024-10-25 17:59:19.244461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:01.062 [2024-10-25 17:59:19.244478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.904 ms 00:18:01.062 [2024-10-25 17:59:19.244487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.062 [2024-10-25 17:59:19.244672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.062 [2024-10-25 17:59:19.244686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:01.062 [2024-10-25 17:59:19.244697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:01.062 [2024-10-25 17:59:19.244705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.062 [2024-10-25 17:59:19.277211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.062 [2024-10-25 17:59:19.277263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:01.062 [2024-10-25 17:59:19.277278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.466 ms 00:18:01.062 [2024-10-25 17:59:19.277289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.062 [2024-10-25 17:59:19.277392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.062 [2024-10-25 17:59:19.277402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:01.062 [2024-10-25 17:59:19.277413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:01.062 [2024-10-25 17:59:19.277421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.062 [2024-10-25 17:59:19.277867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.062 [2024-10-25 17:59:19.277884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:01.062 [2024-10-25 17:59:19.277896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:18:01.062 [2024-10-25 17:59:19.277904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:01.062 [2024-10-25 17:59:19.278048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.062 [2024-10-25 17:59:19.278063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:01.062 [2024-10-25 17:59:19.278073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:18:01.062 [2024-10-25 17:59:19.278081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.062 [2024-10-25 17:59:19.293862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.062 [2024-10-25 17:59:19.293905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:01.062 [2024-10-25 17:59:19.293919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.754 ms 00:18:01.062 [2024-10-25 17:59:19.293927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.062 [2024-10-25 17:59:19.306927] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:01.062 [2024-10-25 17:59:19.306979] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:01.062 [2024-10-25 17:59:19.306994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.062 [2024-10-25 17:59:19.307004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:01.062 [2024-10-25 17:59:19.307017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.910 ms 00:18:01.062 [2024-10-25 17:59:19.307025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.062 [2024-10-25 17:59:19.332339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.062 [2024-10-25 17:59:19.332584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:01.062 [2024-10-25 17:59:19.332610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.186 ms 00:18:01.062 [2024-10-25 17:59:19.332620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.062 [2024-10-25 17:59:19.345029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.062 [2024-10-25 17:59:19.345072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:01.062 [2024-10-25 17:59:19.345091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.294 ms 00:18:01.062 [2024-10-25 17:59:19.345099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.062 [2024-10-25 17:59:19.356723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.062 [2024-10-25 17:59:19.356764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:01.062 [2024-10-25 17:59:19.356778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.521 ms 00:18:01.062 [2024-10-25 17:59:19.356786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.062 [2024-10-25 17:59:19.357466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.062 [2024-10-25 17:59:19.357504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:01.062 [2024-10-25 17:59:19.357515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:18:01.062 [2024-10-25 17:59:19.357523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.062 [2024-10-25 
17:59:19.428058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.062 [2024-10-25 17:59:19.428136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:01.062 [2024-10-25 17:59:19.428156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.501 ms 00:18:01.063 [2024-10-25 17:59:19.428166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.063 [2024-10-25 17:59:19.439601] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:01.063 [2024-10-25 17:59:19.456941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.063 [2024-10-25 17:59:19.457007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:01.063 [2024-10-25 17:59:19.457023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.606 ms 00:18:01.063 [2024-10-25 17:59:19.457035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.063 [2024-10-25 17:59:19.457152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.063 [2024-10-25 17:59:19.457164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:01.063 [2024-10-25 17:59:19.457174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:01.063 [2024-10-25 17:59:19.457184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.063 [2024-10-25 17:59:19.457240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.063 [2024-10-25 17:59:19.457253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:01.063 [2024-10-25 17:59:19.457261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:01.063 [2024-10-25 17:59:19.457270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.063 [2024-10-25 17:59:19.457301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.063 [2024-10-25 17:59:19.457312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:01.063 [2024-10-25 17:59:19.457320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:01.063 [2024-10-25 17:59:19.457330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.063 [2024-10-25 17:59:19.457366] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:01.063 [2024-10-25 17:59:19.457379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.063 [2024-10-25 17:59:19.457387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:01.063 [2024-10-25 17:59:19.457399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:01.063 [2024-10-25 17:59:19.457406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.063 [2024-10-25 17:59:19.482282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.063 [2024-10-25 17:59:19.482498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:01.063 [2024-10-25 17:59:19.482524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.847 ms 00:18:01.063 [2024-10-25 17:59:19.482532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.063 [2024-10-25 17:59:19.482671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.063 [2024-10-25 17:59:19.482684] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:01.063 [2024-10-25 17:59:19.482695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:18:01.063 [2024-10-25 17:59:19.482702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.063 [2024-10-25 17:59:19.484027] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:01.063 [2024-10-25 17:59:19.487533] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 299.912 ms, result 0 00:18:01.063 [2024-10-25 17:59:19.488790] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:01.321 Some configs were skipped because the RPC state that can call them passed over. 00:18:01.321 17:59:19 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:01.321 [2024-10-25 17:59:19.723666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.321 [2024-10-25 17:59:19.723906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:01.321 [2024-10-25 17:59:19.723927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.595 ms 00:18:01.321 [2024-10-25 17:59:19.723939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.321 [2024-10-25 17:59:19.723979] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.914 ms, result 0 00:18:01.321 true 00:18:01.321 17:59:19 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:01.579 [2024-10-25 17:59:19.935437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.579 [2024-10-25 17:59:19.935650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:01.579 [2024-10-25 17:59:19.935676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.169 ms 00:18:01.579 [2024-10-25 17:59:19.935685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.579 [2024-10-25 17:59:19.935729] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.468 ms, result 0 00:18:01.579 true 00:18:01.579 17:59:19 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 73782 00:18:01.579 17:59:19 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 73782 ']' 00:18:01.579 17:59:19 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 73782 00:18:01.579 17:59:19 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:18:01.579 17:59:19 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:01.579 17:59:19 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73782 00:18:01.579 killing process with pid 73782 00:18:01.579 17:59:19 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:01.579 17:59:19 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:01.579 17:59:19 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73782' 00:18:01.579 17:59:19 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 73782 00:18:01.579 17:59:19 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 73782 00:18:02.515 [2024-10-25 17:59:20.658927] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.515 [2024-10-25 17:59:20.658989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:02.515 [2024-10-25 17:59:20.659002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:02.515 [2024-10-25 17:59:20.659010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.515 [2024-10-25 17:59:20.659030] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:02.515 [2024-10-25 17:59:20.661208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.515 [2024-10-25 17:59:20.661246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:02.515 [2024-10-25 17:59:20.661260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.160 ms 00:18:02.515 [2024-10-25 17:59:20.661268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.515 [2024-10-25 17:59:20.661536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.515 [2024-10-25 17:59:20.661567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:02.515 [2024-10-25 17:59:20.661577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:18:02.515 [2024-10-25 17:59:20.661583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.515 [2024-10-25 17:59:20.664743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.515 [2024-10-25 17:59:20.664767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:02.515 [2024-10-25 17:59:20.664777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.142 ms 00:18:02.515 [2024-10-25 17:59:20.664786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.515 [2024-10-25 17:59:20.670296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.515 [2024-10-25 17:59:20.670471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:02.515 [2024-10-25 17:59:20.670492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.477 ms 00:18:02.515 [2024-10-25 17:59:20.670498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.515 [2024-10-25 17:59:20.678695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.515 [2024-10-25 17:59:20.678809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:02.515 [2024-10-25 17:59:20.678867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.137 ms 00:18:02.515 [2024-10-25 17:59:20.678893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.515 [2024-10-25 17:59:20.685859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.515 [2024-10-25 17:59:20.685980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:02.515 [2024-10-25 17:59:20.686038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.919 ms 00:18:02.515 [2024-10-25 17:59:20.686056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.515 [2024-10-25 17:59:20.686176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.515 [2024-10-25 17:59:20.686197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:02.515 [2024-10-25 17:59:20.686215] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:18:02.515 [2024-10-25 17:59:20.686265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.515 [2024-10-25 17:59:20.694401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.515 [2024-10-25 17:59:20.694522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:02.515 [2024-10-25 17:59:20.694587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.101 ms 00:18:02.515 [2024-10-25 17:59:20.694607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.515 [2024-10-25 17:59:20.702532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.515 [2024-10-25 17:59:20.702671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:02.515 [2024-10-25 17:59:20.702742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.867 ms 00:18:02.515 [2024-10-25 17:59:20.702760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.515 [2024-10-25 17:59:20.710360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.515 [2024-10-25 17:59:20.710471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:02.515 [2024-10-25 17:59:20.710519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.551 ms 00:18:02.515 [2024-10-25 17:59:20.710536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.515 [2024-10-25 17:59:20.717484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.515 [2024-10-25 17:59:20.717643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:02.515 [2024-10-25 17:59:20.717694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.842 ms 00:18:02.515 [2024-10-25 17:59:20.717710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.515 [2024-10-25 17:59:20.717749] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:02.515 [2024-10-25 17:59:20.717830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.717859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.717884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.717908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.717931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.717960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.718131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.718201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.718545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.718653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.718683] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.718733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.718757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.718808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.718834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.718858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.718955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.718984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.719009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.719061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.719085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.719112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.719155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.719183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.719232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.719260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.719283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.719329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.719352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.719378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.719426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.719454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.719477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.719502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.719564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 
[2024-10-25 17:59:20.719620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.719646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.719692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.719717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.719741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.719809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.719835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.719860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.719914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.719940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.719964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.719988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.720040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.720065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.720117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.720142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.720186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.720212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.720238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.720303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.720330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.720353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.720398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:02.515 [2024-10-25 17:59:20.720423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:18:02.516 [2024-10-25 17:59:20.720576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:02.516 [2024-10-25 17:59:20.720869] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:02.516 [2024-10-25 17:59:20.720879] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 12f9a83d-3b9c-43be-8d23-b592e8419cb1 00:18:02.516 [2024-10-25 17:59:20.720896] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:02.516 [2024-10-25 17:59:20.720907] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:02.516 [2024-10-25 17:59:20.720914] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:02.516 [2024-10-25 17:59:20.720923] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:02.516 [2024-10-25 17:59:20.720929] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:02.516 [2024-10-25 17:59:20.720937] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:02.516 [2024-10-25 17:59:20.720943] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:02.516 [2024-10-25 17:59:20.720950] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:02.516 [2024-10-25 17:59:20.720955] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:02.516 [2024-10-25 17:59:20.720965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:02.516 [2024-10-25 17:59:20.720971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:02.516 [2024-10-25 17:59:20.720982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.217 ms 00:18:02.516 [2024-10-25 17:59:20.720988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.516 [2024-10-25 17:59:20.731590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.516 [2024-10-25 17:59:20.731707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:02.516 [2024-10-25 17:59:20.731792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.527 ms 00:18:02.516 [2024-10-25 17:59:20.731811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.516 [2024-10-25 17:59:20.732167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.516 [2024-10-25 17:59:20.732263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:02.516 [2024-10-25 17:59:20.732312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:18:02.516 [2024-10-25 17:59:20.732373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.516 [2024-10-25 17:59:20.768711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.516 [2024-10-25 17:59:20.768880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:02.516 [2024-10-25 17:59:20.768931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.516 [2024-10-25 17:59:20.768950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.516 [2024-10-25 17:59:20.770081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.516 [2024-10-25 17:59:20.770180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:02.516 [2024-10-25 17:59:20.770246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.516 [2024-10-25 17:59:20.770267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.516 [2024-10-25 17:59:20.770334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.516 [2024-10-25 17:59:20.770410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:02.516 [2024-10-25 17:59:20.770434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.516 [2024-10-25 17:59:20.770450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.516 [2024-10-25 17:59:20.770478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.516 [2024-10-25 17:59:20.770495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:02.516 [2024-10-25 17:59:20.770512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.516 [2024-10-25 17:59:20.770528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.516 [2024-10-25 17:59:20.833319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.516 [2024-10-25 17:59:20.833522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:02.516 [2024-10-25 17:59:20.833590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.516 [2024-10-25 17:59:20.833610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.516 [2024-10-25 
17:59:20.885758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.516 [2024-10-25 17:59:20.885950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:02.516 [2024-10-25 17:59:20.886018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.516 [2024-10-25 17:59:20.886038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.516 [2024-10-25 17:59:20.886162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.516 [2024-10-25 17:59:20.886223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:02.516 [2024-10-25 17:59:20.886247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.516 [2024-10-25 17:59:20.886263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.516 [2024-10-25 17:59:20.886325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.516 [2024-10-25 17:59:20.886345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:02.516 [2024-10-25 17:59:20.886363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.516 [2024-10-25 17:59:20.886378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.516 [2024-10-25 17:59:20.886478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.516 [2024-10-25 17:59:20.886500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:02.516 [2024-10-25 17:59:20.886517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.516 [2024-10-25 17:59:20.886533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.516 [2024-10-25 17:59:20.886586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.516 [2024-10-25 17:59:20.886652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:02.516 [2024-10-25 17:59:20.886673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.516 [2024-10-25 17:59:20.886689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.516 [2024-10-25 17:59:20.886738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.516 [2024-10-25 17:59:20.886759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:02.516 [2024-10-25 17:59:20.886778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.516 [2024-10-25 17:59:20.886794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.516 [2024-10-25 17:59:20.886888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.516 [2024-10-25 17:59:20.886910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:02.516 [2024-10-25 17:59:20.886927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.516 [2024-10-25 17:59:20.886943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.516 [2024-10-25 17:59:20.887082] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 228.130 ms, result 0 00:18:03.080 17:59:21 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:03.080 17:59:21 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:03.081 [2024-10-25 17:59:21.506091] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:18:03.081 [2024-10-25 17:59:21.506375] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73831 ] 00:18:03.339 [2024-10-25 17:59:21.661102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.339 [2024-10-25 17:59:21.754063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.597 [2024-10-25 17:59:21.987700] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:03.597 [2024-10-25 17:59:21.987957] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:03.857 [2024-10-25 17:59:22.141751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.857 [2024-10-25 17:59:22.141822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:03.857 [2024-10-25 17:59:22.141834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:03.857 [2024-10-25 17:59:22.141841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.857 [2024-10-25 17:59:22.144149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.857 [2024-10-25 17:59:22.144191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:03.857 [2024-10-25 17:59:22.144201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.291 ms 00:18:03.857 [2024-10-25 17:59:22.144208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.857 [2024-10-25 17:59:22.144293] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:03.857 [2024-10-25 17:59:22.144882] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:03.857 [2024-10-25 17:59:22.145074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.857 [2024-10-25 17:59:22.145084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:03.857 [2024-10-25 17:59:22.145092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.786 ms 00:18:03.857 [2024-10-25 17:59:22.145098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.857 [2024-10-25 17:59:22.146848] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:03.857 [2024-10-25 17:59:22.157207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.857 [2024-10-25 17:59:22.157258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:03.857 [2024-10-25 17:59:22.157274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.361 ms 00:18:03.857 [2024-10-25 17:59:22.157282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.857 [2024-10-25 17:59:22.157396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.857 [2024-10-25 17:59:22.157407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:03.857 [2024-10-25 17:59:22.157414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.028 ms 00:18:03.857 [2024-10-25 17:59:22.157421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.857 [2024-10-25 17:59:22.163997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.857 [2024-10-25 17:59:22.164219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:03.857 [2024-10-25 17:59:22.164236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.535 ms 00:18:03.857 [2024-10-25 17:59:22.164243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.857 [2024-10-25 17:59:22.164370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.857 [2024-10-25 17:59:22.164379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:03.857 [2024-10-25 17:59:22.164386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:18:03.857 [2024-10-25 17:59:22.164392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.857 [2024-10-25 17:59:22.164420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.857 [2024-10-25 17:59:22.164427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:03.857 [2024-10-25 17:59:22.164435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:03.857 [2024-10-25 17:59:22.164442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.857 [2024-10-25 17:59:22.164463] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:03.857 [2024-10-25 17:59:22.167578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.857 [2024-10-25 17:59:22.167604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:03.857 [2024-10-25 17:59:22.167612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.124 ms 00:18:03.857 [2024-10-25 17:59:22.167618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.857 [2024-10-25 17:59:22.167665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.857 [2024-10-25 17:59:22.167674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:03.857 [2024-10-25 17:59:22.167682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:03.857 [2024-10-25 17:59:22.167688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.857 [2024-10-25 17:59:22.167704] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:03.857 [2024-10-25 17:59:22.167722] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:03.857 [2024-10-25 17:59:22.167754] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:03.857 [2024-10-25 17:59:22.167767] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:03.857 [2024-10-25 17:59:22.167850] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:03.857 [2024-10-25 17:59:22.167859] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:03.858 [2024-10-25 17:59:22.167868] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:03.858 [2024-10-25 17:59:22.167877] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:03.858 [2024-10-25 17:59:22.167885] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:03.858 [2024-10-25 17:59:22.167894] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:03.858 [2024-10-25 17:59:22.167900] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:03.858 [2024-10-25 17:59:22.167906] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:03.858 [2024-10-25 17:59:22.167912] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:03.858 [2024-10-25 17:59:22.167918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.858 [2024-10-25 17:59:22.167924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:03.858 [2024-10-25 17:59:22.167931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:18:03.858 [2024-10-25 17:59:22.167937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.858 [2024-10-25 17:59:22.168007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.858 [2024-10-25 17:59:22.168014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:03.858 [2024-10-25 17:59:22.168021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:03.858 [2024-10-25 17:59:22.168028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.858 [2024-10-25 17:59:22.168106] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:03.858 [2024-10-25 17:59:22.168115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:03.858 [2024-10-25 17:59:22.168123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:03.858 [2024-10-25 17:59:22.168129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:03.858 [2024-10-25 17:59:22.168135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:03.858 [2024-10-25 17:59:22.168141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:03.858 [2024-10-25 17:59:22.168147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:03.858 [2024-10-25 17:59:22.168154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:03.858 [2024-10-25 17:59:22.168159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:03.858 [2024-10-25 17:59:22.168164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:03.858 [2024-10-25 17:59:22.168169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:03.858 [2024-10-25 17:59:22.168174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:03.858 [2024-10-25 17:59:22.168182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:03.858 [2024-10-25 17:59:22.168196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:03.858 [2024-10-25 17:59:22.168202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:03.858 [2024-10-25 17:59:22.168207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:03.858 [2024-10-25 17:59:22.168212] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:03.858 [2024-10-25 17:59:22.168218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:03.858 [2024-10-25 17:59:22.168223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:03.858 [2024-10-25 17:59:22.168229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:03.858 [2024-10-25 17:59:22.168235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:03.858 [2024-10-25 17:59:22.168240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:03.858 [2024-10-25 17:59:22.168245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:03.858 [2024-10-25 17:59:22.168251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:03.858 [2024-10-25 17:59:22.168256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:03.858 [2024-10-25 17:59:22.168261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:03.858 [2024-10-25 17:59:22.168267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:03.858 [2024-10-25 17:59:22.168273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:03.858 [2024-10-25 17:59:22.168278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:03.858 [2024-10-25 17:59:22.168283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:03.858 [2024-10-25 17:59:22.168288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:03.858 [2024-10-25 17:59:22.168294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:03.858 [2024-10-25 17:59:22.168299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:03.858 [2024-10-25 17:59:22.168305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:03.858 [2024-10-25 17:59:22.168310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:03.858 [2024-10-25 17:59:22.168315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:03.858 [2024-10-25 17:59:22.168320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:03.858 [2024-10-25 17:59:22.168326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:03.858 [2024-10-25 17:59:22.168331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:03.858 [2024-10-25 17:59:22.168337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:03.858 [2024-10-25 17:59:22.168342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:03.858 [2024-10-25 17:59:22.168347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:03.858 [2024-10-25 17:59:22.168353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:03.858 [2024-10-25 17:59:22.168358] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:03.858 [2024-10-25 17:59:22.168366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:03.858 [2024-10-25 17:59:22.168373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:03.858 [2024-10-25 17:59:22.168379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:03.858 [2024-10-25 17:59:22.168387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:03.858 
[2024-10-25 17:59:22.168392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:03.858 [2024-10-25 17:59:22.168398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:03.858 [2024-10-25 17:59:22.168403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:03.858 [2024-10-25 17:59:22.168408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:03.858 [2024-10-25 17:59:22.168415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:03.858 [2024-10-25 17:59:22.168422] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:03.858 [2024-10-25 17:59:22.168429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:03.858 [2024-10-25 17:59:22.168436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:03.858 [2024-10-25 17:59:22.168442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:03.858 [2024-10-25 17:59:22.168447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:03.858 [2024-10-25 17:59:22.168453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:03.858 [2024-10-25 17:59:22.168459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:03.858 [2024-10-25 17:59:22.168464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:03.858 [2024-10-25 17:59:22.168471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:03.858 [2024-10-25 17:59:22.168477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:03.858 [2024-10-25 17:59:22.168482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:03.858 [2024-10-25 17:59:22.168488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:03.858 [2024-10-25 17:59:22.168493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:03.858 [2024-10-25 17:59:22.168499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:03.858 [2024-10-25 17:59:22.168505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:03.858 [2024-10-25 17:59:22.168510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:03.858 [2024-10-25 17:59:22.168516] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:03.858 [2024-10-25 17:59:22.168522] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:03.858 [2024-10-25 17:59:22.168529] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:03.858 [2024-10-25 17:59:22.168535] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:03.858 [2024-10-25 17:59:22.168542] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:03.858 [2024-10-25 17:59:22.168548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:03.858 [2024-10-25 17:59:22.168567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.858 [2024-10-25 17:59:22.168575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:03.858 [2024-10-25 17:59:22.168581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.500 ms 00:18:03.858 [2024-10-25 17:59:22.168591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.858 [2024-10-25 17:59:22.192925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.858 [2024-10-25 17:59:22.192972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:03.858 [2024-10-25 17:59:22.192984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.274 ms 00:18:03.858 [2024-10-25 17:59:22.192991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.858 [2024-10-25 17:59:22.193131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.858 [2024-10-25 17:59:22.193145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:03.858 [2024-10-25 17:59:22.193157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:18:03.858 [2024-10-25 17:59:22.193163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.858 [2024-10-25 17:59:22.238402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.858 [2024-10-25 17:59:22.238459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:03.858 [2024-10-25 17:59:22.238471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.215 ms 00:18:03.859 [2024-10-25 17:59:22.238478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.859 [2024-10-25 17:59:22.238638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.859 [2024-10-25 17:59:22.238648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:03.859 [2024-10-25 17:59:22.238656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:03.859 [2024-10-25 17:59:22.238663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.859 [2024-10-25 17:59:22.239061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.859 [2024-10-25 17:59:22.239085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:03.859 [2024-10-25 17:59:22.239094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:18:03.859 [2024-10-25 17:59:22.239102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.859 [2024-10-25 
17:59:22.239230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.859 [2024-10-25 17:59:22.239246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:03.859 [2024-10-25 17:59:22.239254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:18:03.859 [2024-10-25 17:59:22.239261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.859 [2024-10-25 17:59:22.251443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.859 [2024-10-25 17:59:22.251675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:03.859 [2024-10-25 17:59:22.251691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.162 ms 00:18:03.859 [2024-10-25 17:59:22.251698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.859 [2024-10-25 17:59:22.262259] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:03.859 [2024-10-25 17:59:22.262384] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:03.859 [2024-10-25 17:59:22.262479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.859 [2024-10-25 17:59:22.262497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:03.859 [2024-10-25 17:59:22.262515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.638 ms 00:18:03.859 [2024-10-25 17:59:22.262566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.859 [2024-10-25 17:59:22.282310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.859 [2024-10-25 17:59:22.282524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:03.859 [2024-10-25 17:59:22.282588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.307 ms 00:18:03.859 [2024-10-25 17:59:22.282609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.117 [2024-10-25 17:59:22.292284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.117 [2024-10-25 17:59:22.292430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:04.117 [2024-10-25 17:59:22.292476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.569 ms 00:18:04.117 [2024-10-25 17:59:22.292495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.117 [2024-10-25 17:59:22.301687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.117 [2024-10-25 17:59:22.301812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:04.117 [2024-10-25 17:59:22.301858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.107 ms 00:18:04.117 [2024-10-25 17:59:22.301875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.117 [2024-10-25 17:59:22.302437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.117 [2024-10-25 17:59:22.302516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:04.117 [2024-10-25 17:59:22.302570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.469 ms 00:18:04.117 [2024-10-25 17:59:22.302589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.117 [2024-10-25 17:59:22.352320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:04.117 [2024-10-25 17:59:22.352541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:04.117 [2024-10-25 17:59:22.352608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.695 ms 00:18:04.117 [2024-10-25 17:59:22.352629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.117 [2024-10-25 17:59:22.361725] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:04.117 [2024-10-25 17:59:22.377573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.117 [2024-10-25 17:59:22.377755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:04.117 [2024-10-25 17:59:22.377799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.809 ms 00:18:04.117 [2024-10-25 17:59:22.377817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.117 [2024-10-25 17:59:22.377945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.117 [2024-10-25 17:59:22.377970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:04.117 [2024-10-25 17:59:22.377988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:04.117 [2024-10-25 17:59:22.378003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.117 [2024-10-25 17:59:22.378062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.117 [2024-10-25 17:59:22.378136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:04.117 [2024-10-25 17:59:22.378156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:04.117 [2024-10-25 17:59:22.378171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.117 [2024-10-25 17:59:22.378208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.117 [2024-10-25 17:59:22.378226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:04.117 [2024-10-25 17:59:22.378244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:04.117 [2024-10-25 17:59:22.378259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.117 [2024-10-25 17:59:22.378390] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:04.117 [2024-10-25 17:59:22.378417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.117 [2024-10-25 17:59:22.378432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:04.117 [2024-10-25 17:59:22.378448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:04.117 [2024-10-25 17:59:22.378464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.117 [2024-10-25 17:59:22.398140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.117 [2024-10-25 17:59:22.398340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:04.117 [2024-10-25 17:59:22.398386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.649 ms 00:18:04.117 [2024-10-25 17:59:22.398404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.117 [2024-10-25 17:59:22.398611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.117 [2024-10-25 17:59:22.398697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:18:04.117 [2024-10-25 17:59:22.398719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:18:04.117 [2024-10-25 17:59:22.398735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.117 [2024-10-25 17:59:22.399629] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:04.117 [2024-10-25 17:59:22.402594] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 257.596 ms, result 0 00:18:04.117 [2024-10-25 17:59:22.403302] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:04.117 [2024-10-25 17:59:22.414342] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:05.051  [2024-10-25T17:59:24.426Z] Copying: 47/256 [MB] (47 MBps) [2024-10-25T17:59:25.798Z] Copying: 90/256 [MB] (43 MBps) [2024-10-25T17:59:26.729Z] Copying: 133/256 [MB] (42 MBps) [2024-10-25T17:59:27.659Z] Copying: 175/256 [MB] (42 MBps) [2024-10-25T17:59:28.590Z] Copying: 217/256 [MB] (42 MBps) [2024-10-25T17:59:28.590Z] Copying: 256/256 [MB] (average 43 MBps)[2024-10-25 17:59:28.302358] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:10.155 [2024-10-25 17:59:28.312013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.155 [2024-10-25 17:59:28.312215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:10.155 [2024-10-25 17:59:28.312280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:10.155 [2024-10-25 17:59:28.312305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.155 [2024-10-25 17:59:28.312347] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:10.155 [2024-10-25 17:59:28.315285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.155 [2024-10-25 17:59:28.315324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:10.155 [2024-10-25 17:59:28.315337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.773 ms 00:18:10.155 [2024-10-25 17:59:28.315346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.155 [2024-10-25 17:59:28.315728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.155 [2024-10-25 17:59:28.315798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:10.155 [2024-10-25 17:59:28.315865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.357 ms 00:18:10.155 [2024-10-25 17:59:28.315890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.155 [2024-10-25 17:59:28.319625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.155 [2024-10-25 17:59:28.319705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:10.155 [2024-10-25 17:59:28.319766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.704 ms 00:18:10.155 [2024-10-25 17:59:28.319788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.155 [2024-10-25 17:59:28.326839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.155 [2024-10-25 17:59:28.326962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 
00:18:10.155 [2024-10-25 17:59:28.327019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.979 ms 00:18:10.155 [2024-10-25 17:59:28.327042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.155 [2024-10-25 17:59:28.351583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.155 [2024-10-25 17:59:28.351797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:10.155 [2024-10-25 17:59:28.351889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.446 ms 00:18:10.155 [2024-10-25 17:59:28.351911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.155 [2024-10-25 17:59:28.366828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.155 [2024-10-25 17:59:28.367037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:10.155 [2024-10-25 17:59:28.367104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.858 ms 00:18:10.155 [2024-10-25 17:59:28.367127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.155 [2024-10-25 17:59:28.367345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.155 [2024-10-25 17:59:28.367475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:10.155 [2024-10-25 17:59:28.367523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:18:10.155 [2024-10-25 17:59:28.367545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.155 [2024-10-25 17:59:28.392489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.155 [2024-10-25 17:59:28.392714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:10.155 [2024-10-25 17:59:28.392769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.852 ms 00:18:10.155 [2024-10-25 17:59:28.392792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.155 [2024-10-25 17:59:28.417617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.155 [2024-10-25 17:59:28.417817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:10.155 [2024-10-25 17:59:28.417906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.759 ms 00:18:10.155 [2024-10-25 17:59:28.417927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.155 [2024-10-25 17:59:28.441438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.155 [2024-10-25 17:59:28.441634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:10.155 [2024-10-25 17:59:28.441719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.441 ms 00:18:10.155 [2024-10-25 17:59:28.441765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.155 [2024-10-25 17:59:28.464482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.155 [2024-10-25 17:59:28.464685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:10.155 [2024-10-25 17:59:28.464745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.623 ms 00:18:10.155 [2024-10-25 17:59:28.464767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.155 [2024-10-25 17:59:28.464823] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:10.155 [2024-10-25 
17:59:28.464887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:10.155 [2024-10-25 17:59:28.465037 .. 17:59:28.466718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-99: 0 / 261120 wr_cnt: 0 state: free [98 identical records elided] 00:18:10.156 [2024-10-25 17:59:28.466725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:10.157 [2024-10-25 17:59:28.466743] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:10.157 [2024-10-25 17:59:28.466752] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 12f9a83d-3b9c-43be-8d23-b592e8419cb1 00:18:10.157 [2024-10-25 17:59:28.466760] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:10.157 [2024-10-25 17:59:28.466769] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:10.157 [2024-10-25 17:59:28.466776] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:10.157 [2024-10-25 17:59:28.466784] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:10.157 [2024-10-25 17:59:28.466791] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:10.157 [2024-10-25 17:59:28.466799] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:10.157 [2024-10-25 17:59:28.466807] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:10.157 [2024-10-25 17:59:28.466814] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:10.157 [2024-10-25 17:59:28.466821] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:10.157 [2024-10-25 17:59:28.466829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.157 [2024-10-25 17:59:28.466837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:10.157 [2024-10-25 17:59:28.466846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.008 ms 00:18:10.157 [2024-10-25 17:59:28.466857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.157 [2024-10-25 17:59:28.479766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.157 [2024-10-25 17:59:28.479813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:10.157 [2024-10-25 17:59:28.479826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.854 ms 00:18:10.157 [2024-10-25 17:59:28.479834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.157 [2024-10-25 17:59:28.480215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.157 [2024-10-25 17:59:28.480240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:10.157 [2024-10-25 17:59:28.480250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:18:10.157 [2024-10-25 17:59:28.480258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.157 [2024-10-25 17:59:28.516315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.157 [2024-10-25 17:59:28.516372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:10.157 [2024-10-25 17:59:28.516385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.157 [2024-10-25 17:59:28.516393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.157 [2024-10-25 17:59:28.516497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.157 [2024-10-25 17:59:28.516512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:10.157 [2024-10-25 17:59:28.516520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.157 [2024-10-25 17:59:28.516528] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:10.157 [2024-10-25 17:59:28.516591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.157 [2024-10-25 17:59:28.516601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:10.157 [2024-10-25 17:59:28.516609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.157 [2024-10-25 17:59:28.516617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.157 [2024-10-25 17:59:28.516636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.157 [2024-10-25 17:59:28.516646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:10.157 [2024-10-25 17:59:28.516657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.157 [2024-10-25 17:59:28.516665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.414 [2024-10-25 17:59:28.598246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.414 [2024-10-25 17:59:28.598316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:10.414 [2024-10-25 17:59:28.598331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.414 [2024-10-25 17:59:28.598341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.414 [2024-10-25 17:59:28.664009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.414 [2024-10-25 17:59:28.664074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:10.414 [2024-10-25 17:59:28.664094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.414 [2024-10-25 17:59:28.664103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.414 [2024-10-25 17:59:28.664177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.414 [2024-10-25 17:59:28.664187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:10.414 [2024-10-25 17:59:28.664195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.414 [2024-10-25 17:59:28.664203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.414 [2024-10-25 17:59:28.664235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.414 [2024-10-25 17:59:28.664244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:10.414 [2024-10-25 17:59:28.664252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.414 [2024-10-25 17:59:28.664260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.414 [2024-10-25 17:59:28.664361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.414 [2024-10-25 17:59:28.664372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:10.414 [2024-10-25 17:59:28.664380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.414 [2024-10-25 17:59:28.664388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.414 [2024-10-25 17:59:28.664421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.414 [2024-10-25 17:59:28.664430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:10.414 [2024-10-25 17:59:28.664439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:18:10.414 [2024-10-25 17:59:28.664447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.414 [2024-10-25 17:59:28.664489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.414 [2024-10-25 17:59:28.664498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:10.414 [2024-10-25 17:59:28.664505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.414 [2024-10-25 17:59:28.664512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.414 [2024-10-25 17:59:28.664577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.414 [2024-10-25 17:59:28.664588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:10.414 [2024-10-25 17:59:28.664597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.414 [2024-10-25 17:59:28.664605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.414 [2024-10-25 17:59:28.664755] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 352.741 ms, result 0 00:18:10.977 00:18:10.977 00:18:10.977 17:59:29 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:18:10.977 17:59:29 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:11.542 17:59:29 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:11.799 [2024-10-25 17:59:29.997279] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
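The three ftl.ftl_trim commands echoed above form the verification step of this test: "cmp --bytes=4194304 .../test/ftl/data /dev/zero" asserts that the first 4 MiB of the read-back data file is zero-filled, i.e. the trimmed range reads back as zeroes; "md5sum" fingerprints the file; and spdk_dd then writes 1024 blocks of the random_pattern file through the ftl0 bdev described by ftl.json. Below is a minimal Python sketch of the same zero-check and fingerprint, assuming only the paths echoed above; it is an illustration, not part of the harness.

    # Sketch: reproduce what "cmp --bytes=4194304 data /dev/zero" and
    # "md5sum data" above verify. DATA is the path echoed by trim.sh.
    import hashlib

    DATA = "/home/vagrant/spdk_repo/spdk/test/ftl/data"
    CHECK_BYTES = 4 * 1024 * 1024  # --bytes=4194304

    with open(DATA, "rb") as f:
        head = f.read(CHECK_BYTES)

    # A trimmed LBA range must read back as zeroes, byte for byte.
    assert head == bytes(CHECK_BYTES), "trimmed range is not zero-filled"

    # Equivalent of "md5sum data": fingerprint the file for later comparison.
    md5 = hashlib.md5()
    with open(DATA, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            md5.update(chunk)
    print(md5.hexdigest(), DATA)
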
00:18:11.799 [2024-10-25 17:59:29.997392] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73935 ] 00:18:11.799 [2024-10-25 17:59:30.159392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.057 [2024-10-25 17:59:30.275185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.316 [2024-10-25 17:59:30.549448] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:12.316 [2024-10-25 17:59:30.549543] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:12.316 [2024-10-25 17:59:30.704920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.316 [2024-10-25 17:59:30.704988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:12.316 [2024-10-25 17:59:30.705002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:12.316 [2024-10-25 17:59:30.705011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.316 [2024-10-25 17:59:30.707886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.316 [2024-10-25 17:59:30.707932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:12.316 [2024-10-25 17:59:30.707945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.854 ms 00:18:12.316 [2024-10-25 17:59:30.707953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.316 [2024-10-25 17:59:30.708113] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:12.316 [2024-10-25 17:59:30.708881] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:12.316 [2024-10-25 17:59:30.708905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.316 [2024-10-25 17:59:30.708915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:12.316 [2024-10-25 17:59:30.708925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.802 ms 00:18:12.316 [2024-10-25 17:59:30.708934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.316 [2024-10-25 17:59:30.710745] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:12.316 [2024-10-25 17:59:30.723889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.316 [2024-10-25 17:59:30.723951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:12.316 [2024-10-25 17:59:30.723973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.144 ms 00:18:12.316 [2024-10-25 17:59:30.723983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.316 [2024-10-25 17:59:30.724219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.316 [2024-10-25 17:59:30.724240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:12.316 [2024-10-25 17:59:30.724250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:12.316 [2024-10-25 17:59:30.724258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.316 [2024-10-25 17:59:30.731232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:12.316 [2024-10-25 17:59:30.731289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:12.316 [2024-10-25 17:59:30.731303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.912 ms 00:18:12.316 [2024-10-25 17:59:30.731311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.316 [2024-10-25 17:59:30.731451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.316 [2024-10-25 17:59:30.731462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:12.316 [2024-10-25 17:59:30.731471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:12.316 [2024-10-25 17:59:30.731481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.316 [2024-10-25 17:59:30.731515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.316 [2024-10-25 17:59:30.731524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:12.317 [2024-10-25 17:59:30.731534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:12.317 [2024-10-25 17:59:30.731542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.317 [2024-10-25 17:59:30.731580] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:12.317 [2024-10-25 17:59:30.735434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.317 [2024-10-25 17:59:30.735476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:12.317 [2024-10-25 17:59:30.735488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.862 ms 00:18:12.317 [2024-10-25 17:59:30.735495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.317 [2024-10-25 17:59:30.735604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.317 [2024-10-25 17:59:30.735616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:12.317 [2024-10-25 17:59:30.735626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:12.317 [2024-10-25 17:59:30.735634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.317 [2024-10-25 17:59:30.735656] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:12.317 [2024-10-25 17:59:30.735679] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:12.317 [2024-10-25 17:59:30.735717] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:12.317 [2024-10-25 17:59:30.735734] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:12.317 [2024-10-25 17:59:30.735840] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:12.317 [2024-10-25 17:59:30.735851] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:12.317 [2024-10-25 17:59:30.735862] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:12.317 [2024-10-25 17:59:30.735873] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:12.317 [2024-10-25 17:59:30.735882] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:12.317 [2024-10-25 17:59:30.735894] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:12.317 [2024-10-25 17:59:30.735901] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:12.317 [2024-10-25 17:59:30.735909] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:12.317 [2024-10-25 17:59:30.735916] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:12.317 [2024-10-25 17:59:30.735925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.317 [2024-10-25 17:59:30.735932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:12.317 [2024-10-25 17:59:30.735941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:18:12.317 [2024-10-25 17:59:30.735948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.317 [2024-10-25 17:59:30.736036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.317 [2024-10-25 17:59:30.736045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:12.317 [2024-10-25 17:59:30.736053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:12.317 [2024-10-25 17:59:30.736063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.317 [2024-10-25 17:59:30.736163] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:12.317 [2024-10-25 17:59:30.736174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:12.317 [2024-10-25 17:59:30.736182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:12.317 [2024-10-25 17:59:30.736191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.317 [2024-10-25 17:59:30.736198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:12.317 [2024-10-25 17:59:30.736206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:12.317 [2024-10-25 17:59:30.736213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:12.317 [2024-10-25 17:59:30.736219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:12.317 [2024-10-25 17:59:30.736227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:12.317 [2024-10-25 17:59:30.736233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:12.317 [2024-10-25 17:59:30.736240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:12.317 [2024-10-25 17:59:30.736248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:12.317 [2024-10-25 17:59:30.736255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:12.317 [2024-10-25 17:59:30.736270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:12.317 [2024-10-25 17:59:30.736276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:12.317 [2024-10-25 17:59:30.736283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.317 [2024-10-25 17:59:30.736289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:12.317 [2024-10-25 17:59:30.736295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:12.317 [2024-10-25 17:59:30.736304] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.317 [2024-10-25 17:59:30.736311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:12.317 [2024-10-25 17:59:30.736318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:12.317 [2024-10-25 17:59:30.736324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:12.317 [2024-10-25 17:59:30.736331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:12.317 [2024-10-25 17:59:30.736337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:12.317 [2024-10-25 17:59:30.736343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:12.317 [2024-10-25 17:59:30.736350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:12.317 [2024-10-25 17:59:30.736356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:12.317 [2024-10-25 17:59:30.736362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:12.317 [2024-10-25 17:59:30.736369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:12.317 [2024-10-25 17:59:30.736376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:12.317 [2024-10-25 17:59:30.736383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:12.317 [2024-10-25 17:59:30.736389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:12.317 [2024-10-25 17:59:30.736395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:12.317 [2024-10-25 17:59:30.736402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:12.317 [2024-10-25 17:59:30.736408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:12.317 [2024-10-25 17:59:30.736414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:12.317 [2024-10-25 17:59:30.736421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:12.317 [2024-10-25 17:59:30.736427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:12.317 [2024-10-25 17:59:30.736434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:12.317 [2024-10-25 17:59:30.736439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.317 [2024-10-25 17:59:30.736446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:12.317 [2024-10-25 17:59:30.736452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:12.317 [2024-10-25 17:59:30.736458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.317 [2024-10-25 17:59:30.736465] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:12.317 [2024-10-25 17:59:30.736473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:12.317 [2024-10-25 17:59:30.736481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:12.317 [2024-10-25 17:59:30.736488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.317 [2024-10-25 17:59:30.736498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:12.317 [2024-10-25 17:59:30.736505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:12.317 [2024-10-25 17:59:30.736511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:12.317 
[2024-10-25 17:59:30.736520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:12.317 [2024-10-25 17:59:30.736526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:12.317 [2024-10-25 17:59:30.736533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:12.317 [2024-10-25 17:59:30.736541] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:12.317 [2024-10-25 17:59:30.736550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:12.317 [2024-10-25 17:59:30.736570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:12.317 [2024-10-25 17:59:30.736579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:12.317 [2024-10-25 17:59:30.736586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:12.317 [2024-10-25 17:59:30.736593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:12.317 [2024-10-25 17:59:30.736601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:12.317 [2024-10-25 17:59:30.736609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:12.317 [2024-10-25 17:59:30.736617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:12.318 [2024-10-25 17:59:30.736625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:12.318 [2024-10-25 17:59:30.736632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:12.318 [2024-10-25 17:59:30.736639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:12.318 [2024-10-25 17:59:30.736646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:12.318 [2024-10-25 17:59:30.736654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:12.318 [2024-10-25 17:59:30.736661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:12.318 [2024-10-25 17:59:30.736669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:12.318 [2024-10-25 17:59:30.736675] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:12.318 [2024-10-25 17:59:30.736684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:12.318 [2024-10-25 17:59:30.736699] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:12.318 [2024-10-25 17:59:30.736707] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:12.318 [2024-10-25 17:59:30.736714] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:12.318 [2024-10-25 17:59:30.736721] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:12.318 [2024-10-25 17:59:30.736728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.318 [2024-10-25 17:59:30.736736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:12.318 [2024-10-25 17:59:30.736743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.635 ms 00:18:12.318 [2024-10-25 17:59:30.736753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.575 [2024-10-25 17:59:30.765517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.575 [2024-10-25 17:59:30.765583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:12.575 [2024-10-25 17:59:30.765597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.706 ms 00:18:12.575 [2024-10-25 17:59:30.765606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.575 [2024-10-25 17:59:30.765782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.575 [2024-10-25 17:59:30.765794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:12.575 [2024-10-25 17:59:30.765807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:12.575 [2024-10-25 17:59:30.765815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.575 [2024-10-25 17:59:30.808129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.575 [2024-10-25 17:59:30.808193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:12.575 [2024-10-25 17:59:30.808208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.287 ms 00:18:12.575 [2024-10-25 17:59:30.808217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.575 [2024-10-25 17:59:30.808380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.575 [2024-10-25 17:59:30.808393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:12.575 [2024-10-25 17:59:30.808402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:12.575 [2024-10-25 17:59:30.808410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.575 [2024-10-25 17:59:30.808843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.575 [2024-10-25 17:59:30.808860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:12.575 [2024-10-25 17:59:30.808869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:18:12.575 [2024-10-25 17:59:30.808878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.575 [2024-10-25 17:59:30.809026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.575 [2024-10-25 17:59:30.809036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:12.575 [2024-10-25 17:59:30.809045] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:18:12.575 [2024-10-25 17:59:30.809052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.575 [2024-10-25 17:59:30.823621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.575 [2024-10-25 17:59:30.823669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:12.575 [2024-10-25 17:59:30.823682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.545 ms 00:18:12.575 [2024-10-25 17:59:30.823690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.575 [2024-10-25 17:59:30.836762] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:12.575 [2024-10-25 17:59:30.836813] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:12.575 [2024-10-25 17:59:30.836828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.575 [2024-10-25 17:59:30.836838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:12.575 [2024-10-25 17:59:30.836850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.980 ms 00:18:12.575 [2024-10-25 17:59:30.836859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.575 [2024-10-25 17:59:30.866702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.576 [2024-10-25 17:59:30.866799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:12.576 [2024-10-25 17:59:30.866814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.716 ms 00:18:12.576 [2024-10-25 17:59:30.866822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.576 [2024-10-25 17:59:30.879698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.576 [2024-10-25 17:59:30.879749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:12.576 [2024-10-25 17:59:30.879763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.729 ms 00:18:12.576 [2024-10-25 17:59:30.879771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.576 [2024-10-25 17:59:30.891450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.576 [2024-10-25 17:59:30.891497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:12.576 [2024-10-25 17:59:30.891508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.561 ms 00:18:12.576 [2024-10-25 17:59:30.891516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.576 [2024-10-25 17:59:30.892206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.576 [2024-10-25 17:59:30.892233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:12.576 [2024-10-25 17:59:30.892244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:18:12.576 [2024-10-25 17:59:30.892252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.576 [2024-10-25 17:59:30.951449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.576 [2024-10-25 17:59:30.951530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:12.576 [2024-10-25 17:59:30.951546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 59.169 ms 00:18:12.576 [2024-10-25 17:59:30.951577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.576 [2024-10-25 17:59:30.963181] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:12.576 [2024-10-25 17:59:30.980549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.576 [2024-10-25 17:59:30.980615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:12.576 [2024-10-25 17:59:30.980630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.792 ms 00:18:12.576 [2024-10-25 17:59:30.980639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.576 [2024-10-25 17:59:30.980775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.576 [2024-10-25 17:59:30.980795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:12.576 [2024-10-25 17:59:30.980811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:12.576 [2024-10-25 17:59:30.980819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.576 [2024-10-25 17:59:30.980879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.576 [2024-10-25 17:59:30.980888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:12.576 [2024-10-25 17:59:30.980897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:12.576 [2024-10-25 17:59:30.980906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.576 [2024-10-25 17:59:30.980932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.576 [2024-10-25 17:59:30.980944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:12.576 [2024-10-25 17:59:30.980952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:12.576 [2024-10-25 17:59:30.980960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.576 [2024-10-25 17:59:30.980995] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:12.576 [2024-10-25 17:59:30.981006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.576 [2024-10-25 17:59:30.981015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:12.576 [2024-10-25 17:59:30.981023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:12.576 [2024-10-25 17:59:30.981032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.576 [2024-10-25 17:59:31.005296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.576 [2024-10-25 17:59:31.005354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:12.576 [2024-10-25 17:59:31.005369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.240 ms 00:18:12.576 [2024-10-25 17:59:31.005377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.576 [2024-10-25 17:59:31.005539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.576 [2024-10-25 17:59:31.005552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:12.576 [2024-10-25 17:59:31.005576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:12.576 [2024-10-25 17:59:31.005584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
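Two figures in the FTL layout dump above can be cross-checked against each other: the L2P region size follows from "L2P entries: 23592960" times "L2P address size: 4" (90.00 MiB, matching "Region l2p ... blocks: 90.00 MiB"), and the base-device data region of blk_sz:0x1900000 blocks matches the reported 102400.00 MiB only with a 4 KiB FTL block. A short arithmetic sketch follows; the 4 KiB block size is inferred from these numbers, not stated in the log.

    # Cross-check two numbers from the "FTL layout" dump above.
    # The 4 KiB block size is an inference from the log, not read from it.
    FTL_BLOCK_SIZE = 4096

    l2p_entries = 23592960      # "L2P entries: 23592960"
    l2p_addr_size = 4           # "L2P address size: 4"
    l2p_mib = l2p_entries * l2p_addr_size / (1024 * 1024)
    print(f"l2p region:  {l2p_mib:.2f} MiB")    # 90.00, as dumped for Region l2p

    data_blocks = 0x1900000     # base dev "Region type:0x9 ... blk_sz:0x1900000"
    data_mib = data_blocks * FTL_BLOCK_SIZE / (1024 * 1024)
    print(f"data region: {data_mib:.2f} MiB")   # 102400.00, as dumped for Region data_btm
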
00:18:12.576 [2024-10-25 17:59:31.006640] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:12.576 [2024-10-25 17:59:31.010205] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 301.377 ms, result 0 00:18:12.834 [2024-10-25 17:59:31.010972] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:12.834 [2024-10-25 17:59:31.024320] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:12.834  [2024-10-25T17:59:31.269Z] Copying: 4096/4096 [kB] (average 35 MBps) [2024-10-25 17:59:31.139947] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:12.834 [2024-10-25 17:59:31.149926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.834 [2024-10-25 17:59:31.149984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:12.834 [2024-10-25 17:59:31.150000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:12.834 [2024-10-25 17:59:31.150008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.834 [2024-10-25 17:59:31.150046] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:12.834 [2024-10-25 17:59:31.152806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.834 [2024-10-25 17:59:31.152840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:12.834 [2024-10-25 17:59:31.152853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.745 ms 00:18:12.834 [2024-10-25 17:59:31.152862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.834 [2024-10-25 17:59:31.154438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.834 [2024-10-25 17:59:31.154473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:12.834 [2024-10-25 17:59:31.154483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.550 ms 00:18:12.834 [2024-10-25 17:59:31.154491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.834 [2024-10-25 17:59:31.158443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.834 [2024-10-25 17:59:31.158470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:12.834 [2024-10-25 17:59:31.158488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.935 ms 00:18:12.834 [2024-10-25 17:59:31.158496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.834 [2024-10-25 17:59:31.165400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.834 [2024-10-25 17:59:31.165434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:12.834 [2024-10-25 17:59:31.165444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.877 ms 00:18:12.834 [2024-10-25 17:59:31.165452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.834 [2024-10-25 17:59:31.190435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.834 [2024-10-25 17:59:31.190495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:12.834 [2024-10-25 17:59:31.190510] mngt/ftl_mngt.c: 430:trace_step:
*NOTICE*: [FTL][ftl0] duration: 24.897 ms 00:18:12.834 [2024-10-25 17:59:31.190518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.834 [2024-10-25 17:59:31.205601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.834 [2024-10-25 17:59:31.205664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:12.834 [2024-10-25 17:59:31.205679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.006 ms 00:18:12.834 [2024-10-25 17:59:31.205693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.834 [2024-10-25 17:59:31.205870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.834 [2024-10-25 17:59:31.205882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:12.834 [2024-10-25 17:59:31.205892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:18:12.834 [2024-10-25 17:59:31.205901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.834 [2024-10-25 17:59:31.229718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.834 [2024-10-25 17:59:31.229773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:12.834 [2024-10-25 17:59:31.229787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.787 ms 00:18:12.834 [2024-10-25 17:59:31.229795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.834 [2024-10-25 17:59:31.252946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.834 [2024-10-25 17:59:31.253000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:12.834 [2024-10-25 17:59:31.253014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.088 ms 00:18:12.834 [2024-10-25 17:59:31.253021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.094 [2024-10-25 17:59:31.275735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.095 [2024-10-25 17:59:31.275807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:13.095 [2024-10-25 17:59:31.275820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.655 ms 00:18:13.095 [2024-10-25 17:59:31.275829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.095 [2024-10-25 17:59:31.298718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.095 [2024-10-25 17:59:31.298769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:13.095 [2024-10-25 17:59:31.298782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.794 ms 00:18:13.095 [2024-10-25 17:59:31.298791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.095 [2024-10-25 17:59:31.298846] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:13.095 [2024-10-25 17:59:31.298864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.298876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.298884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.298892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:18:13.095 [2024-10-25 17:59:31.298900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.298909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.298917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.298926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.298934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.298941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.298949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.298957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.298965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.298972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.298979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.298986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.298995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:13.095 [2024-10-25 17:59:31.299368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299460] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:13.096 [2024-10-25 17:59:31.299669] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:13.096 [2024-10-25 17:59:31.299678] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 12f9a83d-3b9c-43be-8d23-b592e8419cb1 00:18:13.096 [2024-10-25 17:59:31.299688] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:13.096 [2024-10-25 17:59:31.299702] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:18:13.096 [2024-10-25 17:59:31.299711] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:13.096 [2024-10-25 17:59:31.299719] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:13.096 [2024-10-25 17:59:31.299727] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:13.096 [2024-10-25 17:59:31.299737] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:13.096 [2024-10-25 17:59:31.299744] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:13.096 [2024-10-25 17:59:31.299751] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:13.096 [2024-10-25 17:59:31.299758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:13.096 [2024-10-25 17:59:31.299765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.096 [2024-10-25 17:59:31.299777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:13.096 [2024-10-25 17:59:31.299786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.921 ms 00:18:13.096 [2024-10-25 17:59:31.299794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.096 [2024-10-25 17:59:31.312814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.096 [2024-10-25 17:59:31.312866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:13.096 [2024-10-25 17:59:31.312880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.998 ms 00:18:13.096 [2024-10-25 17:59:31.312889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.096 [2024-10-25 17:59:31.313302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.096 [2024-10-25 17:59:31.313319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:13.096 [2024-10-25 17:59:31.313329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:18:13.096 [2024-10-25 17:59:31.313337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.096 [2024-10-25 17:59:31.349392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.096 [2024-10-25 17:59:31.349452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:13.096 [2024-10-25 17:59:31.349466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.096 [2024-10-25 17:59:31.349474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.096 [2024-10-25 17:59:31.349629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.096 [2024-10-25 17:59:31.349641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:13.096 [2024-10-25 17:59:31.349649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.096 [2024-10-25 17:59:31.349658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.096 [2024-10-25 17:59:31.349717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.096 [2024-10-25 17:59:31.349728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:13.096 [2024-10-25 17:59:31.349737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.096 [2024-10-25 17:59:31.349744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.096 [2024-10-25 17:59:31.349761] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.096 [2024-10-25 17:59:31.349774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:13.096 [2024-10-25 17:59:31.349782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.096 [2024-10-25 17:59:31.349789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.096 [2024-10-25 17:59:31.430582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.096 [2024-10-25 17:59:31.430648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:13.096 [2024-10-25 17:59:31.430661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.096 [2024-10-25 17:59:31.430669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.096 [2024-10-25 17:59:31.497229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.096 [2024-10-25 17:59:31.497306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:13.096 [2024-10-25 17:59:31.497319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.096 [2024-10-25 17:59:31.497327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.096 [2024-10-25 17:59:31.497415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.096 [2024-10-25 17:59:31.497425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:13.096 [2024-10-25 17:59:31.497434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.096 [2024-10-25 17:59:31.497441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.096 [2024-10-25 17:59:31.497472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.096 [2024-10-25 17:59:31.497480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:13.096 [2024-10-25 17:59:31.497510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.097 [2024-10-25 17:59:31.497519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.097 [2024-10-25 17:59:31.497636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.097 [2024-10-25 17:59:31.497647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:13.097 [2024-10-25 17:59:31.497655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.097 [2024-10-25 17:59:31.497662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.097 [2024-10-25 17:59:31.497695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.097 [2024-10-25 17:59:31.497706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:13.097 [2024-10-25 17:59:31.497714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.097 [2024-10-25 17:59:31.497724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.097 [2024-10-25 17:59:31.497765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.097 [2024-10-25 17:59:31.497775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:13.097 [2024-10-25 17:59:31.497783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.097 [2024-10-25 17:59:31.497790] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:13.097 [2024-10-25 17:59:31.497836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.097 [2024-10-25 17:59:31.497845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:13.097 [2024-10-25 17:59:31.497856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.097 [2024-10-25 17:59:31.497863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.097 [2024-10-25 17:59:31.498011] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 348.088 ms, result 0 00:18:14.029 00:18:14.029 00:18:14.029 17:59:32 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:14.029 17:59:32 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=73960 00:18:14.029 17:59:32 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 73960 00:18:14.029 17:59:32 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 73960 ']' 00:18:14.029 17:59:32 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.029 17:59:32 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:14.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.029 17:59:32 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.029 17:59:32 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:14.029 17:59:32 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:14.029 [2024-10-25 17:59:32.427384] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
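One line of the first shutdown's statistics dump further up also deserves decoding: WAF (write amplification factor) is, by the usual definition, total media writes divided by user writes. This run logged total writes: 960 against user writes: 0, so the quotient is undefined and the dump renders it as WAF: inf; all 960 writes were evidently FTL housekeeping from the startup/shutdown cycle (the Persist NV cache / valid map / band info / superblock steps above), since no user I/O touched ftl0 in this pass. Had there been, say, 100 user writes, WAF would have come out to 960 / 100 = 9.6.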
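With 'FTL shutdown' finished, trim.sh@92 launches a fresh spdk_tgt with FTL init logging enabled (-L ftl_init), waitforlisten blocks until pid 73960 is serving RPCs on /var/tmp/spdk.sock, trim.sh@96 replays the saved bdev configuration via rpc.py load_config (the "Some configs were skipped because the RPC state that can call them passed over." notice further down comes from that replay), and trim.sh@99/@100 then issue the two bdev_ftl_unmap calls seen below. A minimal sketch of that launch-and-trim sequence; the poll loop is a simplified stand-in for autotest_common.sh's waitforlisten, which additionally checks that the pid stays alive and enforces a retry limit:
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
  svcpid=$!
  # Simplified waitforlisten: poll the default RPC socket until the
  # target answers a trivial method call.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done
  # (trim.sh@96 replays the saved bdev JSON here with 'rpc.py load_config'.)
  # Each unmap trims 1024 blocks: one range at LBA 0, one at
  # LBA 23591936 = 23592960 - 1024, i.e. the last 1024 blocks of the
  # 23592960-entry L2P space reported in the layout dump.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
Each unmap runs as its own short management process; the log below records both as 'FTL trim' finishing in about a millisecond with result 0, following the same Action / name / duration / status trace_step pattern as the startup pipeline.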
00:18:14.029 [2024-10-25 17:59:32.427575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73960 ] 00:18:14.286 [2024-10-25 17:59:32.603182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.286 [2024-10-25 17:59:32.712300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.219 17:59:33 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:15.219 17:59:33 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:18:15.219 17:59:33 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:15.219 [2024-10-25 17:59:33.552238] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:15.219 [2024-10-25 17:59:33.552320] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:15.479 [2024-10-25 17:59:33.724545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.479 [2024-10-25 17:59:33.724630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:15.479 [2024-10-25 17:59:33.724647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:15.479 [2024-10-25 17:59:33.724656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.479 [2024-10-25 17:59:33.727503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.479 [2024-10-25 17:59:33.727548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:15.479 [2024-10-25 17:59:33.727577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.825 ms 00:18:15.479 [2024-10-25 17:59:33.727588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.479 [2024-10-25 17:59:33.727796] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:15.479 [2024-10-25 17:59:33.728525] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:15.479 [2024-10-25 17:59:33.728568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.479 [2024-10-25 17:59:33.728577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:15.479 [2024-10-25 17:59:33.728588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:18:15.479 [2024-10-25 17:59:33.728596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.479 [2024-10-25 17:59:33.730488] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:15.479 [2024-10-25 17:59:33.743595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.479 [2024-10-25 17:59:33.743653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:15.479 [2024-10-25 17:59:33.743669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.111 ms 00:18:15.479 [2024-10-25 17:59:33.743679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.479 [2024-10-25 17:59:33.743805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.479 [2024-10-25 17:59:33.743819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:15.479 [2024-10-25 17:59:33.743829] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:15.479 [2024-10-25 17:59:33.743839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.479 [2024-10-25 17:59:33.750754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.479 [2024-10-25 17:59:33.750802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:15.479 [2024-10-25 17:59:33.750814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.863 ms 00:18:15.479 [2024-10-25 17:59:33.750824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.479 [2024-10-25 17:59:33.750951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.479 [2024-10-25 17:59:33.750964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:15.479 [2024-10-25 17:59:33.750973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:18:15.479 [2024-10-25 17:59:33.750982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.479 [2024-10-25 17:59:33.751013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.479 [2024-10-25 17:59:33.751026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:15.479 [2024-10-25 17:59:33.751035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:15.479 [2024-10-25 17:59:33.751044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.479 [2024-10-25 17:59:33.751073] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:15.479 [2024-10-25 17:59:33.754801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.479 [2024-10-25 17:59:33.754833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:15.479 [2024-10-25 17:59:33.754845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.725 ms 00:18:15.479 [2024-10-25 17:59:33.754853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.479 [2024-10-25 17:59:33.754932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.479 [2024-10-25 17:59:33.754943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:15.479 [2024-10-25 17:59:33.754954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:15.479 [2024-10-25 17:59:33.754961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.479 [2024-10-25 17:59:33.754985] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:15.479 [2024-10-25 17:59:33.755008] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:15.479 [2024-10-25 17:59:33.755054] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:15.479 [2024-10-25 17:59:33.755071] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:15.479 [2024-10-25 17:59:33.755186] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:15.479 [2024-10-25 17:59:33.755197] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:15.479 [2024-10-25 17:59:33.755210] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:15.479 [2024-10-25 17:59:33.755220] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:15.479 [2024-10-25 17:59:33.755232] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:15.479 [2024-10-25 17:59:33.755241] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:15.479 [2024-10-25 17:59:33.755251] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:15.479 [2024-10-25 17:59:33.755258] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:15.479 [2024-10-25 17:59:33.755269] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:15.480 [2024-10-25 17:59:33.755277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.480 [2024-10-25 17:59:33.755286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:15.480 [2024-10-25 17:59:33.755294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:18:15.480 [2024-10-25 17:59:33.755303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.480 [2024-10-25 17:59:33.755394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.480 [2024-10-25 17:59:33.755405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:15.480 [2024-10-25 17:59:33.755414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:18:15.480 [2024-10-25 17:59:33.755423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.480 [2024-10-25 17:59:33.755526] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:15.480 [2024-10-25 17:59:33.755537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:15.480 [2024-10-25 17:59:33.755546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:15.480 [2024-10-25 17:59:33.755570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.480 [2024-10-25 17:59:33.755580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:15.480 [2024-10-25 17:59:33.755588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:15.480 [2024-10-25 17:59:33.755595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:15.480 [2024-10-25 17:59:33.755606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:15.480 [2024-10-25 17:59:33.755614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:15.480 [2024-10-25 17:59:33.755623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:15.480 [2024-10-25 17:59:33.755630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:15.480 [2024-10-25 17:59:33.755638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:15.480 [2024-10-25 17:59:33.755645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:15.480 [2024-10-25 17:59:33.755655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:15.480 [2024-10-25 17:59:33.755663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:15.480 [2024-10-25 17:59:33.755671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.480 
[2024-10-25 17:59:33.755679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:15.480 [2024-10-25 17:59:33.755687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:15.480 [2024-10-25 17:59:33.755695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.480 [2024-10-25 17:59:33.755704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:15.480 [2024-10-25 17:59:33.755717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:15.480 [2024-10-25 17:59:33.755725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:15.480 [2024-10-25 17:59:33.755732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:15.480 [2024-10-25 17:59:33.755743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:15.480 [2024-10-25 17:59:33.755750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:15.480 [2024-10-25 17:59:33.755759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:15.480 [2024-10-25 17:59:33.755768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:15.480 [2024-10-25 17:59:33.755776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:15.480 [2024-10-25 17:59:33.755783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:15.480 [2024-10-25 17:59:33.755792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:15.480 [2024-10-25 17:59:33.755798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:15.480 [2024-10-25 17:59:33.755808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:15.480 [2024-10-25 17:59:33.755815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:15.480 [2024-10-25 17:59:33.755823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:15.480 [2024-10-25 17:59:33.755830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:15.480 [2024-10-25 17:59:33.755838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:15.480 [2024-10-25 17:59:33.755844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:15.480 [2024-10-25 17:59:33.755853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:15.480 [2024-10-25 17:59:33.755860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:15.480 [2024-10-25 17:59:33.755869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.480 [2024-10-25 17:59:33.755876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:15.480 [2024-10-25 17:59:33.755885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:15.480 [2024-10-25 17:59:33.755892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.480 [2024-10-25 17:59:33.755901] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:15.480 [2024-10-25 17:59:33.755908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:15.480 [2024-10-25 17:59:33.755918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:15.480 [2024-10-25 17:59:33.755927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.480 [2024-10-25 17:59:33.755936] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:18:15.480 [2024-10-25 17:59:33.755943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:15.480 [2024-10-25 17:59:33.755951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:15.480 [2024-10-25 17:59:33.755959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:15.480 [2024-10-25 17:59:33.755967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:15.480 [2024-10-25 17:59:33.755974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:15.480 [2024-10-25 17:59:33.755984] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:15.480 [2024-10-25 17:59:33.755993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:15.480 [2024-10-25 17:59:33.756006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:15.480 [2024-10-25 17:59:33.756014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:15.480 [2024-10-25 17:59:33.756022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:15.480 [2024-10-25 17:59:33.756031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:15.480 [2024-10-25 17:59:33.756040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:15.480 [2024-10-25 17:59:33.756047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:15.480 [2024-10-25 17:59:33.756056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:15.480 [2024-10-25 17:59:33.756063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:15.480 [2024-10-25 17:59:33.756073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:15.480 [2024-10-25 17:59:33.756080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:15.480 [2024-10-25 17:59:33.756089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:15.480 [2024-10-25 17:59:33.756096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:15.480 [2024-10-25 17:59:33.756104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:15.480 [2024-10-25 17:59:33.756113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:15.480 [2024-10-25 17:59:33.756122] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:15.480 [2024-10-25 
17:59:33.756130] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:15.480 [2024-10-25 17:59:33.756147] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:15.481 [2024-10-25 17:59:33.756155] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:15.481 [2024-10-25 17:59:33.756163] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:15.481 [2024-10-25 17:59:33.756178] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:15.481 [2024-10-25 17:59:33.756189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.481 [2024-10-25 17:59:33.756198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:15.481 [2024-10-25 17:59:33.756207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.731 ms 00:18:15.481 [2024-10-25 17:59:33.756214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.481 [2024-10-25 17:59:33.785116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.481 [2024-10-25 17:59:33.785175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:15.481 [2024-10-25 17:59:33.785191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.839 ms 00:18:15.481 [2024-10-25 17:59:33.785200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.481 [2024-10-25 17:59:33.785375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.481 [2024-10-25 17:59:33.785387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:15.481 [2024-10-25 17:59:33.785398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:18:15.481 [2024-10-25 17:59:33.785406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.481 [2024-10-25 17:59:33.818041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.481 [2024-10-25 17:59:33.818098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:15.481 [2024-10-25 17:59:33.818117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.606 ms 00:18:15.481 [2024-10-25 17:59:33.818125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.481 [2024-10-25 17:59:33.818249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.481 [2024-10-25 17:59:33.818259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:15.481 [2024-10-25 17:59:33.818271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:15.481 [2024-10-25 17:59:33.818278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.481 [2024-10-25 17:59:33.818722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.481 [2024-10-25 17:59:33.818739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:15.481 [2024-10-25 17:59:33.818750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:18:15.481 [2024-10-25 17:59:33.818760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:15.481 [2024-10-25 17:59:33.818897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.481 [2024-10-25 17:59:33.818906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:15.481 [2024-10-25 17:59:33.818918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:18:15.481 [2024-10-25 17:59:33.818926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.481 [2024-10-25 17:59:33.834654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.481 [2024-10-25 17:59:33.834708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:15.481 [2024-10-25 17:59:33.834723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.701 ms 00:18:15.481 [2024-10-25 17:59:33.834733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.481 [2024-10-25 17:59:33.848098] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:15.481 [2024-10-25 17:59:33.848159] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:15.481 [2024-10-25 17:59:33.848175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.481 [2024-10-25 17:59:33.848185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:15.481 [2024-10-25 17:59:33.848198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.294 ms 00:18:15.481 [2024-10-25 17:59:33.848207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.481 [2024-10-25 17:59:33.873668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.481 [2024-10-25 17:59:33.873737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:15.481 [2024-10-25 17:59:33.873755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.241 ms 00:18:15.481 [2024-10-25 17:59:33.873764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.481 [2024-10-25 17:59:33.887043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.481 [2024-10-25 17:59:33.887100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:15.481 [2024-10-25 17:59:33.887119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.130 ms 00:18:15.481 [2024-10-25 17:59:33.887127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.481 [2024-10-25 17:59:33.900015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.481 [2024-10-25 17:59:33.900073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:15.481 [2024-10-25 17:59:33.900088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.756 ms 00:18:15.481 [2024-10-25 17:59:33.900096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.481 [2024-10-25 17:59:33.900852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.481 [2024-10-25 17:59:33.900878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:15.481 [2024-10-25 17:59:33.900890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:18:15.481 [2024-10-25 17:59:33.900898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.740 [2024-10-25 
17:59:33.974977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.740 [2024-10-25 17:59:33.975061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:15.740 [2024-10-25 17:59:33.975081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.040 ms 00:18:15.740 [2024-10-25 17:59:33.975090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.740 [2024-10-25 17:59:33.987122] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:15.740 [2024-10-25 17:59:34.005085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.740 [2024-10-25 17:59:34.005155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:15.740 [2024-10-25 17:59:34.005171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.831 ms 00:18:15.740 [2024-10-25 17:59:34.005185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.740 [2024-10-25 17:59:34.005313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.740 [2024-10-25 17:59:34.005326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:15.740 [2024-10-25 17:59:34.005335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:15.740 [2024-10-25 17:59:34.005344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.740 [2024-10-25 17:59:34.005401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.740 [2024-10-25 17:59:34.005412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:15.740 [2024-10-25 17:59:34.005421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:15.740 [2024-10-25 17:59:34.005431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.740 [2024-10-25 17:59:34.005457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.740 [2024-10-25 17:59:34.005468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:15.740 [2024-10-25 17:59:34.005476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:15.740 [2024-10-25 17:59:34.005497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.740 [2024-10-25 17:59:34.005532] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:15.740 [2024-10-25 17:59:34.005545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.740 [2024-10-25 17:59:34.005552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:15.740 [2024-10-25 17:59:34.005580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:15.740 [2024-10-25 17:59:34.005587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.740 [2024-10-25 17:59:34.031515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.740 [2024-10-25 17:59:34.031586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:15.740 [2024-10-25 17:59:34.031604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.898 ms 00:18:15.740 [2024-10-25 17:59:34.031613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.740 [2024-10-25 17:59:34.031790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.740 [2024-10-25 17:59:34.031807] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:18:15.740 [2024-10-25 17:59:34.031819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms
00:18:15.740 [2024-10-25 17:59:34.031826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:15.740 [2024-10-25 17:59:34.032917] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:18:15.740 [2024-10-25 17:59:34.036855] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 308.033 ms, result 0
00:18:15.740 [2024-10-25 17:59:34.037732] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:15.740 Some configs were skipped because the RPC state that can call them passed over.
00:18:15.740 17:59:34 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:18:15.999 [2024-10-25 17:59:34.264866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:15.999 [2024-10-25 17:59:34.264942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:18:15.999 [2024-10-25 17:59:34.264956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.241 ms
00:18:15.999 [2024-10-25 17:59:34.264966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:15.999 [2024-10-25 17:59:34.265003] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.385 ms, result 0
00:18:15.999 true
00:18:15.999 17:59:34 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:18:16.257 [2024-10-25 17:59:34.468791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:16.257 [2024-10-25 17:59:34.468856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:18:16.257 [2024-10-25 17:59:34.468873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.917 ms
00:18:16.257 [2024-10-25 17:59:34.468880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:16.257 [2024-10-25 17:59:34.468920] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.055 ms, result 0
00:18:16.257 true
00:18:16.257 17:59:34 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 73960
00:18:16.257 17:59:34 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 73960 ']'
00:18:16.257 17:59:34 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 73960
00:18:16.257 17:59:34 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname
00:18:16.257 17:59:34 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:16.257 17:59:34 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73960
00:18:16.257 17:59:34 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:18:16.257 17:59:34 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:18:16.257 killing process with pid 73960
00:18:16.257 17:59:34 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73960'
00:18:16.257 17:59:34 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 73960
00:18:16.257 17:59:34 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 73960
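The two `bdev_ftl_unmap` calls above trim the first and last 1024-block chunks of the FTL's logical space: LBA 0, and LBA 23591936, whose range ends exactly at the 23592960-entry L2P boundary the device reports at startup (23591936 + 1024 = 23592960). `scripts/rpc.py` is a thin client over SPDK's JSON-RPC Unix socket, so the same trims can be issued directly. A minimal sketch, assuming the default socket path `/var/tmp/spdk.sock` (the target's `-r` option is not shown in this log) and JSON parameter names mirroring the rpc.py flags:

```python
#!/usr/bin/env python3
"""Sketch: issue the two bdev_ftl_unmap trims from this test over SPDK's
JSON-RPC Unix socket instead of shelling out to scripts/rpc.py."""
import json
import socket

SOCK_PATH = "/var/tmp/spdk.sock"  # SPDK's default RPC listen address (assumption)

def rpc_call(method: str, params: dict, call_id: int = 1) -> dict:
    """Send one JSON-RPC 2.0 request and read until a full reply parses."""
    req = {"jsonrpc": "2.0", "method": method, "params": params, "id": call_id}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SOCK_PATH)
        sock.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("RPC server closed the socket mid-reply")
            buf += chunk
            try:
                return json.loads(buf)  # whole response received
            except json.JSONDecodeError:
                continue                # partial frame, keep reading

# The trims from trim.sh: 1024 blocks at LBA 0, then 1024 blocks ending at
# the 23592960-entry L2P boundary (23591936 + 1024 = 23592960).
print(rpc_call("bdev_ftl_unmap", {"name": "ftl0", "lba": 0, "num_blocks": 1024}))
print(rpc_call("bdev_ftl_unmap", {"name": "ftl0", "lba": 23591936, "num_blocks": 1024}, call_id=2))
```

Each call carries `true` in its JSON result on success, which is the bare `true` echoed after each rpc.py invocation above.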
00:18:16.823 [2024-10-25 17:59:35.225929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:16.823 [2024-10-25 17:59:35.225995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:18:16.823 [2024-10-25 17:59:35.226007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:18:16.823 [2024-10-25 17:59:35.226015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:16.823 [2024-10-25 17:59:35.226036] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:18:16.823 [2024-10-25 17:59:35.228218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:16.823 [2024-10-25 17:59:35.228254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:18:16.823 [2024-10-25 17:59:35.228267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.164 ms
00:18:16.823 [2024-10-25 17:59:35.228275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:16.823 [2024-10-25 17:59:35.228538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:16.823 [2024-10-25 17:59:35.228552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:18:16.823 [2024-10-25 17:59:35.228572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms
00:18:16.823 [2024-10-25 17:59:35.228579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:16.823 [2024-10-25 17:59:35.231649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:16.823 [2024-10-25 17:59:35.231674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:18:16.823 [2024-10-25 17:59:35.231683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.052 ms
00:18:16.823 [2024-10-25 17:59:35.231692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:16.823 [2024-10-25 17:59:35.236964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:16.823 [2024-10-25 17:59:35.236990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:18:16.823 [2024-10-25 17:59:35.237003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.241 ms
00:18:16.823 [2024-10-25 17:59:35.237009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:16.823 [2024-10-25 17:59:35.245132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:16.823 [2024-10-25 17:59:35.245167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:18:16.823 [2024-10-25 17:59:35.245180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.045 ms
00:18:16.823 [2024-10-25 17:59:35.245195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:16.823 [2024-10-25 17:59:35.251933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:16.823 [2024-10-25 17:59:35.251964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:18:16.823 [2024-10-25 17:59:35.251977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.701 ms
00:18:16.823 [2024-10-25 17:59:35.251985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:16.823 [2024-10-25 17:59:35.252102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:16.823 [2024-10-25 17:59:35.252113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:18:16.823 [2024-10-25 17:59:35.252122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms
00:18:16.823 [2024-10-25 17:59:35.252128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:17.084 [2024-10-25 17:59:35.260071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:17.084 [2024-10-25 17:59:35.260099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:18:17.084 [2024-10-25 17:59:35.260108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.924 ms
00:18:17.084 [2024-10-25 17:59:35.260114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:17.084 [2024-10-25 17:59:35.267711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:17.084 [2024-10-25 17:59:35.267739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:18:17.084 [2024-10-25 17:59:35.267751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.563 ms
00:18:17.084 [2024-10-25 17:59:35.267757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:17.084 [2024-10-25 17:59:35.275004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:17.084 [2024-10-25 17:59:35.275032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:18:17.084 [2024-10-25 17:59:35.275041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.199 ms
00:18:17.084 [2024-10-25 17:59:35.275046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:17.084 [2024-10-25 17:59:35.282095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:17.084 [2024-10-25 17:59:35.282122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:18:17.084 [2024-10-25 17:59:35.282131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.992 ms
00:18:17.084 [2024-10-25 17:59:35.282137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:17.084 [2024-10-25 17:59:35.282167] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:18:17.084 [2024-10-25 17:59:35.282182 .. 17:59:35.282875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 .. Band 100: 0 / 261120 wr_cnt: 0 state: free (identical for all 100 bands)
00:18:17.085 [2024-10-25 17:59:35.282889] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:18:17.085 [2024-10-25 17:59:35.282898] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 12f9a83d-3b9c-43be-8d23-b592e8419cb1
00:18:17.085 [2024-10-25 17:59:35.282914] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:18:17.085 [2024-10-25 17:59:35.282924] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:18:17.085 [2024-10-25 17:59:35.282929] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:18:17.085 [2024-10-25 17:59:35.282937] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:18:17.085 [2024-10-25 17:59:35.282944] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:18:17.085 [2024-10-25 17:59:35.282951] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:18:17.085 [2024-10-25 17:59:35.282957] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:18:17.085 [2024-10-25 17:59:35.282963] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:18:17.085 [2024-10-25 17:59:35.282968] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:18:17.085 [2024-10-25 17:59:35.282974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
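Decoding the dump above: every one of the 100 bands is free with 0 of its 261120 blocks valid, and WAF (write amplification factor) prints as `inf` because it is media writes divided by host writes; the 960 total writes here are all metadata traffic, with 0 user writes in the denominator. A minimal sketch of the reported ratio (this mirrors the printed statistic, not SPDK's internal code):

```python
import math

def waf(total_writes: int, user_writes: int) -> float:
    """Write amplification as the stats dump reports it: media (total)
    writes divided by host (user) writes; zero user writes prints 'inf'."""
    return math.inf if user_writes == 0 else total_writes / user_writes

# Values from the dump above: total writes: 960, user writes: 0 -> WAF: inf
assert math.isinf(waf(960, 0))
```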
00:18:17.085 [2024-10-25 17:59:35.282981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:17.085 [2024-10-25 17:59:35.282989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.809 ms 00:18:17.085 [2024-10-25 17:59:35.282995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.085 [2024-10-25 17:59:35.293046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.085 [2024-10-25 17:59:35.293084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:17.085 [2024-10-25 17:59:35.293099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.031 ms 00:18:17.085 [2024-10-25 17:59:35.293110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.085 [2024-10-25 17:59:35.293451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.085 [2024-10-25 17:59:35.293470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:17.085 [2024-10-25 17:59:35.293479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:18:17.085 [2024-10-25 17:59:35.293494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.085 [2024-10-25 17:59:35.328900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.085 [2024-10-25 17:59:35.328952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:17.085 [2024-10-25 17:59:35.328964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.085 [2024-10-25 17:59:35.328971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.085 [2024-10-25 17:59:35.330064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.085 [2024-10-25 17:59:35.330094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:17.085 [2024-10-25 17:59:35.330104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.085 [2024-10-25 17:59:35.330113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.085 [2024-10-25 17:59:35.330172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.085 [2024-10-25 17:59:35.330181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:17.085 [2024-10-25 17:59:35.330192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.085 [2024-10-25 17:59:35.330198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.085 [2024-10-25 17:59:35.330215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.085 [2024-10-25 17:59:35.330223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:17.085 [2024-10-25 17:59:35.330231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.085 [2024-10-25 17:59:35.330237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.085 [2024-10-25 17:59:35.392925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.085 [2024-10-25 17:59:35.392984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:17.085 [2024-10-25 17:59:35.392998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.085 [2024-10-25 17:59:35.393004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.085 [2024-10-25 
17:59:35.444249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.085 [2024-10-25 17:59:35.444309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:17.085 [2024-10-25 17:59:35.444321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.085 [2024-10-25 17:59:35.444329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.085 [2024-10-25 17:59:35.444440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.085 [2024-10-25 17:59:35.444450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:17.085 [2024-10-25 17:59:35.444461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.085 [2024-10-25 17:59:35.444467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.085 [2024-10-25 17:59:35.444493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.085 [2024-10-25 17:59:35.444501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:17.085 [2024-10-25 17:59:35.444508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.085 [2024-10-25 17:59:35.444514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.085 [2024-10-25 17:59:35.444609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.085 [2024-10-25 17:59:35.444620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:17.085 [2024-10-25 17:59:35.444629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.085 [2024-10-25 17:59:35.444634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.085 [2024-10-25 17:59:35.444669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.085 [2024-10-25 17:59:35.444678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:17.085 [2024-10-25 17:59:35.444686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.085 [2024-10-25 17:59:35.444692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.085 [2024-10-25 17:59:35.444728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.085 [2024-10-25 17:59:35.444737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:17.085 [2024-10-25 17:59:35.444746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.085 [2024-10-25 17:59:35.444752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.085 [2024-10-25 17:59:35.444793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.085 [2024-10-25 17:59:35.444801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:17.085 [2024-10-25 17:59:35.444809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.085 [2024-10-25 17:59:35.444816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.085 [2024-10-25 17:59:35.444940] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 218.988 ms, result 0 00:18:18.020 17:59:36 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:18.020 [2024-10-25 17:59:36.282214] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:18:18.020 [2024-10-25 17:59:36.282331] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74015 ] 00:18:18.020 [2024-10-25 17:59:36.437281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.278 [2024-10-25 17:59:36.554082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.535 [2024-10-25 17:59:36.828967] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:18.535 [2024-10-25 17:59:36.829046] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:18.793 [2024-10-25 17:59:36.985395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.793 [2024-10-25 17:59:36.985465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:18.793 [2024-10-25 17:59:36.985480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:18.793 [2024-10-25 17:59:36.985507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.793 [2024-10-25 17:59:36.988327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.793 [2024-10-25 17:59:36.988367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:18.793 [2024-10-25 17:59:36.988377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.799 ms 00:18:18.793 [2024-10-25 17:59:36.988385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.793 [2024-10-25 17:59:36.988477] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:18.793 [2024-10-25 17:59:36.989203] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:18.793 [2024-10-25 17:59:36.989229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.793 [2024-10-25 17:59:36.989238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:18.793 [2024-10-25 17:59:36.989247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.760 ms 00:18:18.793 [2024-10-25 17:59:36.989255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.793 [2024-10-25 17:59:36.991051] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:18.793 [2024-10-25 17:59:37.003981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.793 [2024-10-25 17:59:37.004034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:18.793 [2024-10-25 17:59:37.004054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.931 ms 00:18:18.793 [2024-10-25 17:59:37.004064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.793 [2024-10-25 17:59:37.004204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.793 [2024-10-25 17:59:37.004218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:18.793 [2024-10-25 17:59:37.004226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:18.793 [2024-10-25 
17:59:37.004235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.793 [2024-10-25 17:59:37.011146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.793 [2024-10-25 17:59:37.011198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:18.793 [2024-10-25 17:59:37.011210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.861 ms 00:18:18.793 [2024-10-25 17:59:37.011220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.793 [2024-10-25 17:59:37.011360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.793 [2024-10-25 17:59:37.011372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:18.793 [2024-10-25 17:59:37.011382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:18.793 [2024-10-25 17:59:37.011390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.793 [2024-10-25 17:59:37.011421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.793 [2024-10-25 17:59:37.011430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:18.793 [2024-10-25 17:59:37.011443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:18.793 [2024-10-25 17:59:37.011451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.793 [2024-10-25 17:59:37.011477] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:18.793 [2024-10-25 17:59:37.015153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.793 [2024-10-25 17:59:37.015187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:18.793 [2024-10-25 17:59:37.015197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.685 ms 00:18:18.793 [2024-10-25 17:59:37.015205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.793 [2024-10-25 17:59:37.015262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.793 [2024-10-25 17:59:37.015272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:18.793 [2024-10-25 17:59:37.015282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:18.793 [2024-10-25 17:59:37.015289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.793 [2024-10-25 17:59:37.015321] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:18.793 [2024-10-25 17:59:37.015344] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:18.793 [2024-10-25 17:59:37.015385] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:18.793 [2024-10-25 17:59:37.015401] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:18.793 [2024-10-25 17:59:37.015506] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:18.793 [2024-10-25 17:59:37.015518] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:18.793 [2024-10-25 17:59:37.015529] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
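The layout dumps that follow describe the same regions twice: once in MiB (the `ftl_layout.c` `dump_region` lines) and once in raw FTL blocks in the v5 superblock table (`blk_offs`/`blk_sz`, hex). The 4 KiB logical block size is an inference, not stated in the log, but every region below is consistent with it; e.g. region type 0x2, the L2P, spans 0x5a00 = 23040 blocks = 90.00 MiB, which is exactly the 23592960 reported L2P entries times the 4-byte address size. A quick cross-check:

```python
# Cross-check the two layout dumps below: the superblock table is in FTL
# blocks (hex), the region dump in MiB. 4 KiB per block is an assumption
# inferred from the numbers, not something the log states directly.
FTL_BLOCK = 4096  # bytes per FTL logical block (assumed)

def blocks_to_mib(blk_sz: int) -> float:
    return blk_sz * FTL_BLOCK / (1024 * 1024)

# Region type:0x2 (the L2P) in the SB table: blk_offs:0x20 blk_sz:0x5a00
assert blocks_to_mib(0x5A00) == 90.0     # matches "Region l2p ... blocks: 90.00 MiB"
# ...and 90 MiB is exactly what the L2P table needs:
assert 23592960 * 4 == 90 * 1024 * 1024  # L2P entries x 4-byte address size
```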
00:18:18.793 [2024-10-25 17:59:37.015540] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:18.793 [2024-10-25 17:59:37.015549] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:18.793 [2024-10-25 17:59:37.015572] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:18.794 [2024-10-25 17:59:37.015581] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:18.794 [2024-10-25 17:59:37.015589] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:18.794 [2024-10-25 17:59:37.015597] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:18.794 [2024-10-25 17:59:37.015605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.794 [2024-10-25 17:59:37.015612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:18.794 [2024-10-25 17:59:37.015621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:18:18.794 [2024-10-25 17:59:37.015629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.794 [2024-10-25 17:59:37.015719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.794 [2024-10-25 17:59:37.015728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:18.794 [2024-10-25 17:59:37.015736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:18.794 [2024-10-25 17:59:37.015746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.794 [2024-10-25 17:59:37.015847] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:18.794 [2024-10-25 17:59:37.015858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:18.794 [2024-10-25 17:59:37.015866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:18.794 [2024-10-25 17:59:37.015875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:18.794 [2024-10-25 17:59:37.015884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:18.794 [2024-10-25 17:59:37.015891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:18.794 [2024-10-25 17:59:37.015897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:18.794 [2024-10-25 17:59:37.015904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:18.794 [2024-10-25 17:59:37.015912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:18.794 [2024-10-25 17:59:37.015919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:18.794 [2024-10-25 17:59:37.015926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:18.794 [2024-10-25 17:59:37.015932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:18.794 [2024-10-25 17:59:37.015939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:18.794 [2024-10-25 17:59:37.015954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:18.794 [2024-10-25 17:59:37.015961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:18.794 [2024-10-25 17:59:37.015968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:18.794 [2024-10-25 17:59:37.015974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:18:18.794 [2024-10-25 17:59:37.015980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:18.794 [2024-10-25 17:59:37.015987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:18.794 [2024-10-25 17:59:37.015994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:18.794 [2024-10-25 17:59:37.016000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:18.794 [2024-10-25 17:59:37.016007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:18.794 [2024-10-25 17:59:37.016014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:18.794 [2024-10-25 17:59:37.016023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:18.794 [2024-10-25 17:59:37.016032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:18.794 [2024-10-25 17:59:37.016039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:18.794 [2024-10-25 17:59:37.016047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:18.794 [2024-10-25 17:59:37.016053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:18.794 [2024-10-25 17:59:37.016060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:18.794 [2024-10-25 17:59:37.016066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:18.794 [2024-10-25 17:59:37.016073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:18.794 [2024-10-25 17:59:37.016079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:18.794 [2024-10-25 17:59:37.016086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:18.794 [2024-10-25 17:59:37.016093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:18.794 [2024-10-25 17:59:37.016100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:18.794 [2024-10-25 17:59:37.016107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:18.794 [2024-10-25 17:59:37.016113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:18.794 [2024-10-25 17:59:37.016120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:18.794 [2024-10-25 17:59:37.016126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:18.794 [2024-10-25 17:59:37.016133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:18.794 [2024-10-25 17:59:37.016139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:18.794 [2024-10-25 17:59:37.016146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:18.794 [2024-10-25 17:59:37.016152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:18.794 [2024-10-25 17:59:37.016158] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:18.794 [2024-10-25 17:59:37.016166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:18.794 [2024-10-25 17:59:37.016173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:18.794 [2024-10-25 17:59:37.016181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:18.794 [2024-10-25 17:59:37.016191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:18.794 [2024-10-25 17:59:37.016197] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:18.794 [2024-10-25 17:59:37.016204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:18.794 [2024-10-25 17:59:37.016211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:18.794 [2024-10-25 17:59:37.016217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:18.794 [2024-10-25 17:59:37.016224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:18.794 [2024-10-25 17:59:37.016233] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:18.794 [2024-10-25 17:59:37.016242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:18.794 [2024-10-25 17:59:37.016251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:18.794 [2024-10-25 17:59:37.016258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:18.794 [2024-10-25 17:59:37.016265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:18.794 [2024-10-25 17:59:37.016274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:18.794 [2024-10-25 17:59:37.016289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:18.794 [2024-10-25 17:59:37.016296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:18.794 [2024-10-25 17:59:37.016303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:18.794 [2024-10-25 17:59:37.016310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:18.794 [2024-10-25 17:59:37.016317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:18.794 [2024-10-25 17:59:37.016324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:18.794 [2024-10-25 17:59:37.016331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:18.794 [2024-10-25 17:59:37.016338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:18.794 [2024-10-25 17:59:37.016345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:18.794 [2024-10-25 17:59:37.016353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:18.794 [2024-10-25 17:59:37.016359] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:18.794 [2024-10-25 17:59:37.016367] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:18.794 [2024-10-25 17:59:37.016376] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:18.794 [2024-10-25 17:59:37.016383] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:18.794 [2024-10-25 17:59:37.016391] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:18.794 [2024-10-25 17:59:37.016398] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:18.794 [2024-10-25 17:59:37.016406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.794 [2024-10-25 17:59:37.016413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:18.794 [2024-10-25 17:59:37.016421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.627 ms 00:18:18.794 [2024-10-25 17:59:37.016432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.794 [2024-10-25 17:59:37.045277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.794 [2024-10-25 17:59:37.045334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:18.794 [2024-10-25 17:59:37.045348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.790 ms 00:18:18.794 [2024-10-25 17:59:37.045356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.794 [2024-10-25 17:59:37.045545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.794 [2024-10-25 17:59:37.045570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:18.794 [2024-10-25 17:59:37.045585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:18:18.794 [2024-10-25 17:59:37.045594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.794 [2024-10-25 17:59:37.088336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.794 [2024-10-25 17:59:37.088401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:18.794 [2024-10-25 17:59:37.088417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.716 ms 00:18:18.794 [2024-10-25 17:59:37.088426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.794 [2024-10-25 17:59:37.088603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.794 [2024-10-25 17:59:37.088616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:18.794 [2024-10-25 17:59:37.088626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:18.794 [2024-10-25 17:59:37.088633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.795 [2024-10-25 17:59:37.089052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.795 [2024-10-25 17:59:37.089071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:18.795 [2024-10-25 17:59:37.089080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:18:18.795 [2024-10-25 17:59:37.089088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.795 [2024-10-25 17:59:37.089240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:18.795 [2024-10-25 17:59:37.089250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:18.795 [2024-10-25 17:59:37.089258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:18:18.795 [2024-10-25 17:59:37.089266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.795 [2024-10-25 17:59:37.103997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.795 [2024-10-25 17:59:37.104049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:18.795 [2024-10-25 17:59:37.104061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.708 ms 00:18:18.795 [2024-10-25 17:59:37.104070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.795 [2024-10-25 17:59:37.117563] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:18.795 [2024-10-25 17:59:37.117618] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:18.795 [2024-10-25 17:59:37.117632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.795 [2024-10-25 17:59:37.117641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:18.795 [2024-10-25 17:59:37.117654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.408 ms 00:18:18.795 [2024-10-25 17:59:37.117662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.795 [2024-10-25 17:59:37.143123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.795 [2024-10-25 17:59:37.143200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:18.795 [2024-10-25 17:59:37.143214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.327 ms 00:18:18.795 [2024-10-25 17:59:37.143223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.795 [2024-10-25 17:59:37.156526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.795 [2024-10-25 17:59:37.156584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:18.795 [2024-10-25 17:59:37.156598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.154 ms 00:18:18.795 [2024-10-25 17:59:37.156605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.795 [2024-10-25 17:59:37.168601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.795 [2024-10-25 17:59:37.168652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:18.795 [2024-10-25 17:59:37.168666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.877 ms 00:18:18.795 [2024-10-25 17:59:37.168674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.795 [2024-10-25 17:59:37.169373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.795 [2024-10-25 17:59:37.169400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:18.795 [2024-10-25 17:59:37.169410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:18:18.795 [2024-10-25 17:59:37.169418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.052 [2024-10-25 17:59:37.229942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.052 [2024-10-25 
17:59:37.230010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:19.053 [2024-10-25 17:59:37.230027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.495 ms 00:18:19.053 [2024-10-25 17:59:37.230036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.053 [2024-10-25 17:59:37.241580] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:19.053 [2024-10-25 17:59:37.259151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.053 [2024-10-25 17:59:37.259207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:19.053 [2024-10-25 17:59:37.259221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.961 ms 00:18:19.053 [2024-10-25 17:59:37.259231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.053 [2024-10-25 17:59:37.259354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.053 [2024-10-25 17:59:37.259370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:19.053 [2024-10-25 17:59:37.259380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:19.053 [2024-10-25 17:59:37.259388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.053 [2024-10-25 17:59:37.259446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.053 [2024-10-25 17:59:37.259456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:19.053 [2024-10-25 17:59:37.259465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:19.053 [2024-10-25 17:59:37.259473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.053 [2024-10-25 17:59:37.259500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.053 [2024-10-25 17:59:37.259508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:19.053 [2024-10-25 17:59:37.259519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:19.053 [2024-10-25 17:59:37.259527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.053 [2024-10-25 17:59:37.259583] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:19.053 [2024-10-25 17:59:37.259594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.053 [2024-10-25 17:59:37.259603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:19.053 [2024-10-25 17:59:37.259612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:19.053 [2024-10-25 17:59:37.259619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.053 [2024-10-25 17:59:37.284721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.053 [2024-10-25 17:59:37.284787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:19.053 [2024-10-25 17:59:37.284801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.079 ms 00:18:19.053 [2024-10-25 17:59:37.284810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.053 [2024-10-25 17:59:37.284942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.053 [2024-10-25 17:59:37.284954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:19.053 [2024-10-25 
17:59:37.284963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:18:19.053 [2024-10-25 17:59:37.284971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.053 [2024-10-25 17:59:37.285931] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:19.053 [2024-10-25 17:59:37.289161] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 300.219 ms, result 0 00:18:19.053 [2024-10-25 17:59:37.289918] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:19.053 [2024-10-25 17:59:37.302874] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:19.984  [2024-10-25T17:59:39.793Z] Copying: 45/256 [MB] (45 MBps) [2024-10-25T17:59:40.726Z] Copying: 90/256 [MB] (44 MBps) [2024-10-25T17:59:41.672Z] Copying: 134/256 [MB] (44 MBps) [2024-10-25T17:59:42.606Z] Copying: 178/256 [MB] (44 MBps) [2024-10-25T17:59:43.547Z] Copying: 220/256 [MB] (41 MBps) [2024-10-25T17:59:44.119Z] Copying: 256/256 [MB] (average 43 MBps)[2024-10-25 17:59:43.835841] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:25.684 [2024-10-25 17:59:43.846994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.684 [2024-10-25 17:59:43.847034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:25.684 [2024-10-25 17:59:43.847049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:25.684 [2024-10-25 17:59:43.847057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.684 [2024-10-25 17:59:43.847079] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:25.684 [2024-10-25 17:59:43.849710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.684 [2024-10-25 17:59:43.849746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:25.684 [2024-10-25 17:59:43.849756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.618 ms 00:18:25.684 [2024-10-25 17:59:43.849764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.684 [2024-10-25 17:59:43.850029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.684 [2024-10-25 17:59:43.850038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:25.684 [2024-10-25 17:59:43.850046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:18:25.684 [2024-10-25 17:59:43.850053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.684 [2024-10-25 17:59:43.854429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.684 [2024-10-25 17:59:43.854452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:25.684 [2024-10-25 17:59:43.854465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.360 ms 00:18:25.684 [2024-10-25 17:59:43.854473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.684 [2024-10-25 17:59:43.861677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.684 [2024-10-25 17:59:43.861712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:25.684 [2024-10-25 17:59:43.861722] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.186 ms 00:18:25.684 [2024-10-25 17:59:43.861729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.684 [2024-10-25 17:59:43.885513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.684 [2024-10-25 17:59:43.885544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:25.684 [2024-10-25 17:59:43.885568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.726 ms 00:18:25.684 [2024-10-25 17:59:43.885576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.684 [2024-10-25 17:59:43.899720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.684 [2024-10-25 17:59:43.899750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:25.684 [2024-10-25 17:59:43.899765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.108 ms 00:18:25.684 [2024-10-25 17:59:43.899773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.684 [2024-10-25 17:59:43.899910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.684 [2024-10-25 17:59:43.899920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:25.684 [2024-10-25 17:59:43.899928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:18:25.684 [2024-10-25 17:59:43.899935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.684 [2024-10-25 17:59:43.923439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.684 [2024-10-25 17:59:43.923474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:25.684 [2024-10-25 17:59:43.923484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.479 ms 00:18:25.684 [2024-10-25 17:59:43.923491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.684 [2024-10-25 17:59:43.947064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.684 [2024-10-25 17:59:43.947102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:25.684 [2024-10-25 17:59:43.947113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.535 ms 00:18:25.684 [2024-10-25 17:59:43.947120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.684 [2024-10-25 17:59:43.970112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.684 [2024-10-25 17:59:43.970152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:25.684 [2024-10-25 17:59:43.970163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.952 ms 00:18:25.684 [2024-10-25 17:59:43.970170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.684 [2024-10-25 17:59:43.993262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.684 [2024-10-25 17:59:43.993293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:25.684 [2024-10-25 17:59:43.993304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.025 ms 00:18:25.684 [2024-10-25 17:59:43.993312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.684 [2024-10-25 17:59:43.993346] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:25.684 [2024-10-25 17:59:43.993366] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993583] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:25.684 [2024-10-25 17:59:43.993675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 
17:59:43.993775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:18:25.685 [2024-10-25 17:59:43.993960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.993999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.994006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.994014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.994021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.994031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.994038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.994045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.994052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.994059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.994067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.994074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.994081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.994089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.994096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.994103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.994117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.994124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.994132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.994139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.994146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:18:25.685 [2024-10-25 17:59:43.994162] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:25.685 [2024-10-25 17:59:43.994169] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 12f9a83d-3b9c-43be-8d23-b592e8419cb1 00:18:25.685 [2024-10-25 17:59:43.994177] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:25.685 [2024-10-25 17:59:43.994184] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:25.685 [2024-10-25 17:59:43.994191] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:25.685 [2024-10-25 17:59:43.994198] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:25.685 [2024-10-25 17:59:43.994205] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:25.685 [2024-10-25 17:59:43.994213] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:25.685 [2024-10-25 17:59:43.994220] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:25.685 [2024-10-25 17:59:43.994226] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:25.685 [2024-10-25 17:59:43.994232] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:25.685 [2024-10-25 17:59:43.994239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.685 [2024-10-25 17:59:43.994247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:25.685 [2024-10-25 17:59:43.994255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.894 ms 00:18:25.685 [2024-10-25 17:59:43.994264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.685 [2024-10-25 17:59:44.006523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.685 [2024-10-25 17:59:44.006663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:25.685 [2024-10-25 17:59:44.006679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.229 ms 00:18:25.685 [2024-10-25 17:59:44.006688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.685 [2024-10-25 17:59:44.007041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.685 [2024-10-25 17:59:44.007061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:25.685 [2024-10-25 17:59:44.007071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:18:25.685 [2024-10-25 17:59:44.007078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.685 [2024-10-25 17:59:44.041788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.685 [2024-10-25 17:59:44.041904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:25.685 [2024-10-25 17:59:44.041920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.685 [2024-10-25 17:59:44.041928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.685 [2024-10-25 17:59:44.042019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.685 [2024-10-25 17:59:44.042031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:25.685 [2024-10-25 17:59:44.042039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.685 [2024-10-25 17:59:44.042046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
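Two details in the shutdown trace above are easy to misread: WAF is reported as "inf" because write amplification is total media writes divided by user writes, and this run recorded 960 media writes against 0 user writes; and the Rollback entries with a duration of 0.000 ms record the shutdown-side teardown of each startup step completing immediately, with status 0 meaning success, not an aborted start. A minimal sketch for pairing step names with durations when auditing a saved console log offline, assuming one log entry per line as in the raw log (the file name autorun.log is a placeholder):

    # Print "duration  step-name" for every FTL management trace step (sketch)
    awk -F': ' '/trace_step.*name:/     {name=$NF}
                /trace_step.*duration:/ {print $NF "  " name}' autorun.log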
00:18:25.685 [2024-10-25 17:59:44.042091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.685 [2024-10-25 17:59:44.042100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:25.685 [2024-10-25 17:59:44.042108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.685 [2024-10-25 17:59:44.042116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.685 [2024-10-25 17:59:44.042132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.685 [2024-10-25 17:59:44.042140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:25.685 [2024-10-25 17:59:44.042150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.685 [2024-10-25 17:59:44.042157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.944 [2024-10-25 17:59:44.118867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.944 [2024-10-25 17:59:44.118913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:25.944 [2024-10-25 17:59:44.118924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.944 [2024-10-25 17:59:44.118932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.944 [2024-10-25 17:59:44.181167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.944 [2024-10-25 17:59:44.181214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:25.944 [2024-10-25 17:59:44.181229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.944 [2024-10-25 17:59:44.181237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.944 [2024-10-25 17:59:44.181303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.944 [2024-10-25 17:59:44.181312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:25.944 [2024-10-25 17:59:44.181320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.944 [2024-10-25 17:59:44.181327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.944 [2024-10-25 17:59:44.181355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.944 [2024-10-25 17:59:44.181364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:25.944 [2024-10-25 17:59:44.181371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.944 [2024-10-25 17:59:44.181379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.944 [2024-10-25 17:59:44.181467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.944 [2024-10-25 17:59:44.181476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:25.944 [2024-10-25 17:59:44.181484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.944 [2024-10-25 17:59:44.181492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.944 [2024-10-25 17:59:44.181529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.944 [2024-10-25 17:59:44.181538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:25.944 [2024-10-25 17:59:44.181545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.944 [2024-10-25 
17:59:44.181553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.944 [2024-10-25 17:59:44.181612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.945 [2024-10-25 17:59:44.181621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:25.945 [2024-10-25 17:59:44.181629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.945 [2024-10-25 17:59:44.181637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.945 [2024-10-25 17:59:44.181677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.945 [2024-10-25 17:59:44.181686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:25.945 [2024-10-25 17:59:44.181694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.945 [2024-10-25 17:59:44.181701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.945 [2024-10-25 17:59:44.181835] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 334.841 ms, result 0 00:18:26.510 00:18:26.510 00:18:26.510 17:59:44 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:27.076 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:18:27.076 17:59:45 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:27.076 17:59:45 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:18:27.076 17:59:45 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:27.076 17:59:45 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:27.076 17:59:45 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:18:27.076 17:59:45 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:27.076 17:59:45 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 73960 00:18:27.076 17:59:45 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 73960 ']' 00:18:27.076 17:59:45 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 73960 00:18:27.076 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (73960) - No such process 00:18:27.076 Process with pid 73960 is not found 00:18:27.076 17:59:45 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 73960 is not found' 00:18:27.076 ************************************ 00:18:27.076 END TEST ftl_trim 00:18:27.076 ************************************ 00:18:27.076 00:18:27.076 real 0m57.177s 00:18:27.076 user 1m10.315s 00:18:27.076 sys 0m29.553s 00:18:27.076 17:59:45 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:27.076 17:59:45 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:27.335 17:59:45 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:27.335 17:59:45 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:27.335 17:59:45 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:27.335 17:59:45 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:27.335 ************************************ 00:18:27.335 START TEST ftl_restore 00:18:27.335 ************************************ 00:18:27.335 17:59:45 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 
0000:00:10.0 0000:00:11.0 00:18:27.335 * Looking for test storage... 00:18:27.335 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:27.335 17:59:45 ftl.ftl_restore -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:18:27.335 17:59:45 ftl.ftl_restore -- common/autotest_common.sh@1689 -- # lcov --version 00:18:27.335 17:59:45 ftl.ftl_restore -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:18:27.335 17:59:45 ftl.ftl_restore -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:18:27.335 17:59:45 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:27.335 17:59:45 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:27.335 17:59:45 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:27.335 17:59:45 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:18:27.335 17:59:45 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:18:27.335 17:59:45 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:18:27.335 17:59:45 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:18:27.335 17:59:45 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:18:27.335 17:59:45 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:18:27.335 17:59:45 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:18:27.335 17:59:45 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:27.335 17:59:45 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:18:27.335 17:59:45 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:18:27.335 17:59:45 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:27.335 17:59:45 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:27.335 17:59:45 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:18:27.335 17:59:45 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:18:27.335 17:59:45 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:27.335 17:59:45 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:18:27.335 17:59:45 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:18:27.335 17:59:45 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:18:27.335 17:59:45 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:18:27.335 17:59:45 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:27.335 17:59:45 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:18:27.335 17:59:45 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:18:27.335 17:59:45 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:27.335 17:59:45 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:27.335 17:59:45 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:18:27.335 17:59:45 ftl.ftl_restore -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:27.335 17:59:45 ftl.ftl_restore -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:18:27.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.335 --rc genhtml_branch_coverage=1 00:18:27.335 --rc genhtml_function_coverage=1 00:18:27.335 --rc genhtml_legend=1 00:18:27.335 --rc geninfo_all_blocks=1 00:18:27.335 --rc geninfo_unexecuted_blocks=1 00:18:27.335 00:18:27.335 ' 00:18:27.335 17:59:45 ftl.ftl_restore -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:18:27.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.335 --rc 
genhtml_branch_coverage=1 00:18:27.335 --rc genhtml_function_coverage=1 00:18:27.335 --rc genhtml_legend=1 00:18:27.335 --rc geninfo_all_blocks=1 00:18:27.335 --rc geninfo_unexecuted_blocks=1 00:18:27.335 00:18:27.335 ' 00:18:27.335 17:59:45 ftl.ftl_restore -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:18:27.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.335 --rc genhtml_branch_coverage=1 00:18:27.335 --rc genhtml_function_coverage=1 00:18:27.335 --rc genhtml_legend=1 00:18:27.335 --rc geninfo_all_blocks=1 00:18:27.335 --rc geninfo_unexecuted_blocks=1 00:18:27.335 00:18:27.335 ' 00:18:27.335 17:59:45 ftl.ftl_restore -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:18:27.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.335 --rc genhtml_branch_coverage=1 00:18:27.335 --rc genhtml_function_coverage=1 00:18:27.335 --rc genhtml_legend=1 00:18:27.335 --rc geninfo_all_blocks=1 00:18:27.335 --rc geninfo_unexecuted_blocks=1 00:18:27.335 00:18:27.335 ' 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/common.sh@22 -- 
# export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.7EquneiPNG 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:18:27.335 17:59:45 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:18:27.336 17:59:45 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:27.336 17:59:45 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=74181 00:18:27.336 17:59:45 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 74181 00:18:27.336 17:59:45 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 74181 ']' 00:18:27.336 17:59:45 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:27.336 17:59:45 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.336 17:59:45 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:27.336 17:59:45 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.336 17:59:45 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:27.336 17:59:45 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:18:27.336 [2024-10-25 17:59:45.766633] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
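The xtrace above shows restore.sh parsing its -c option (NV cache on 0000:00:10.0), taking 0000:00:11.0 as the base device, launching a standalone spdk_tgt (pid 74181), and blocking in waitforlisten until the target's JSON-RPC socket answers on /var/tmp/spdk.sock. A minimal sketch of that launch-and-wait pattern, assuming the default socket path; the real waitforlisten in autotest_common.sh adds retry limits and diagnostics:

    # Start the SPDK target in the background and poll its RPC socket until it answers (sketch)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    svcpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$svcpid" 2>/dev/null || exit 1   # give up if the target died during startup
        sleep 0.5
    done

Every rpc.py call that follows in the test talks to this target.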
00:18:27.336 [2024-10-25 17:59:45.766928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74181 ] 00:18:27.593 [2024-10-25 17:59:45.925468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.594 [2024-10-25 17:59:46.023036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.194 17:59:46 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:28.194 17:59:46 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:18:28.194 17:59:46 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:28.194 17:59:46 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:18:28.194 17:59:46 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:28.194 17:59:46 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:18:28.194 17:59:46 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:18:28.194 17:59:46 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:28.452 17:59:46 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:28.452 17:59:46 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:18:28.710 17:59:46 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:28.710 17:59:46 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:28.710 17:59:46 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:28.710 17:59:46 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:28.710 17:59:46 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:28.710 17:59:46 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:28.710 17:59:47 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:28.710 { 00:18:28.710 "name": "nvme0n1", 00:18:28.710 "aliases": [ 00:18:28.710 "2d55f383-bc53-4428-9316-bd5bb304370b" 00:18:28.710 ], 00:18:28.710 "product_name": "NVMe disk", 00:18:28.710 "block_size": 4096, 00:18:28.710 "num_blocks": 1310720, 00:18:28.710 "uuid": "2d55f383-bc53-4428-9316-bd5bb304370b", 00:18:28.710 "numa_id": -1, 00:18:28.710 "assigned_rate_limits": { 00:18:28.710 "rw_ios_per_sec": 0, 00:18:28.710 "rw_mbytes_per_sec": 0, 00:18:28.710 "r_mbytes_per_sec": 0, 00:18:28.710 "w_mbytes_per_sec": 0 00:18:28.710 }, 00:18:28.710 "claimed": true, 00:18:28.710 "claim_type": "read_many_write_one", 00:18:28.710 "zoned": false, 00:18:28.710 "supported_io_types": { 00:18:28.710 "read": true, 00:18:28.710 "write": true, 00:18:28.710 "unmap": true, 00:18:28.710 "flush": true, 00:18:28.710 "reset": true, 00:18:28.710 "nvme_admin": true, 00:18:28.710 "nvme_io": true, 00:18:28.710 "nvme_io_md": false, 00:18:28.710 "write_zeroes": true, 00:18:28.710 "zcopy": false, 00:18:28.710 "get_zone_info": false, 00:18:28.710 "zone_management": false, 00:18:28.710 "zone_append": false, 00:18:28.710 "compare": true, 00:18:28.710 "compare_and_write": false, 00:18:28.710 "abort": true, 00:18:28.710 "seek_hole": false, 00:18:28.710 "seek_data": false, 00:18:28.710 "copy": true, 00:18:28.710 "nvme_iov_md": false 00:18:28.710 }, 00:18:28.710 "driver_specific": { 00:18:28.710 "nvme": [ 
00:18:28.710 { 00:18:28.710 "pci_address": "0000:00:11.0", 00:18:28.710 "trid": { 00:18:28.710 "trtype": "PCIe", 00:18:28.710 "traddr": "0000:00:11.0" 00:18:28.710 }, 00:18:28.710 "ctrlr_data": { 00:18:28.710 "cntlid": 0, 00:18:28.710 "vendor_id": "0x1b36", 00:18:28.710 "model_number": "QEMU NVMe Ctrl", 00:18:28.710 "serial_number": "12341", 00:18:28.710 "firmware_revision": "8.0.0", 00:18:28.710 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:28.710 "oacs": { 00:18:28.710 "security": 0, 00:18:28.710 "format": 1, 00:18:28.710 "firmware": 0, 00:18:28.710 "ns_manage": 1 00:18:28.710 }, 00:18:28.710 "multi_ctrlr": false, 00:18:28.710 "ana_reporting": false 00:18:28.710 }, 00:18:28.710 "vs": { 00:18:28.710 "nvme_version": "1.4" 00:18:28.710 }, 00:18:28.710 "ns_data": { 00:18:28.710 "id": 1, 00:18:28.710 "can_share": false 00:18:28.710 } 00:18:28.710 } 00:18:28.710 ], 00:18:28.710 "mp_policy": "active_passive" 00:18:28.710 } 00:18:28.710 } 00:18:28.710 ]' 00:18:28.710 17:59:47 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:28.710 17:59:47 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:28.710 17:59:47 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:28.710 17:59:47 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:28.710 17:59:47 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:28.710 17:59:47 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:18:28.710 17:59:47 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:18:28.710 17:59:47 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:28.710 17:59:47 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:18:28.710 17:59:47 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:28.710 17:59:47 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:28.968 17:59:47 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=e4759a77-fe86-438a-8bdd-6e445396c7c4 00:18:28.968 17:59:47 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:18:28.968 17:59:47 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e4759a77-fe86-438a-8bdd-6e445396c7c4 00:18:29.225 17:59:47 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:29.483 17:59:47 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=867d4a25-516d-4326-8393-271889abf794 00:18:29.483 17:59:47 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 867d4a25-516d-4326-8393-271889abf794 00:18:29.741 17:59:48 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=fae458ed-5b83-46ce-a61b-b0fcbaafd62d 00:18:29.741 17:59:48 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:18:29.741 17:59:48 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 fae458ed-5b83-46ce-a61b-b0fcbaafd62d 00:18:29.741 17:59:48 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:18:29.741 17:59:48 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:29.741 17:59:48 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=fae458ed-5b83-46ce-a61b-b0fcbaafd62d 00:18:29.741 17:59:48 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:18:29.741 17:59:48 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 
fae458ed-5b83-46ce-a61b-b0fcbaafd62d 00:18:29.741 17:59:48 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=fae458ed-5b83-46ce-a61b-b0fcbaafd62d 00:18:29.741 17:59:48 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:29.741 17:59:48 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:29.741 17:59:48 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:29.741 17:59:48 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fae458ed-5b83-46ce-a61b-b0fcbaafd62d 00:18:29.998 17:59:48 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:29.998 { 00:18:29.998 "name": "fae458ed-5b83-46ce-a61b-b0fcbaafd62d", 00:18:29.998 "aliases": [ 00:18:29.998 "lvs/nvme0n1p0" 00:18:29.998 ], 00:18:29.998 "product_name": "Logical Volume", 00:18:29.998 "block_size": 4096, 00:18:29.998 "num_blocks": 26476544, 00:18:29.998 "uuid": "fae458ed-5b83-46ce-a61b-b0fcbaafd62d", 00:18:29.998 "assigned_rate_limits": { 00:18:29.998 "rw_ios_per_sec": 0, 00:18:29.998 "rw_mbytes_per_sec": 0, 00:18:29.998 "r_mbytes_per_sec": 0, 00:18:29.998 "w_mbytes_per_sec": 0 00:18:29.998 }, 00:18:29.998 "claimed": false, 00:18:29.998 "zoned": false, 00:18:29.998 "supported_io_types": { 00:18:29.998 "read": true, 00:18:29.998 "write": true, 00:18:29.998 "unmap": true, 00:18:29.998 "flush": false, 00:18:29.998 "reset": true, 00:18:29.998 "nvme_admin": false, 00:18:29.998 "nvme_io": false, 00:18:29.998 "nvme_io_md": false, 00:18:29.998 "write_zeroes": true, 00:18:29.998 "zcopy": false, 00:18:29.998 "get_zone_info": false, 00:18:29.998 "zone_management": false, 00:18:29.998 "zone_append": false, 00:18:29.998 "compare": false, 00:18:29.998 "compare_and_write": false, 00:18:29.998 "abort": false, 00:18:29.998 "seek_hole": true, 00:18:29.998 "seek_data": true, 00:18:29.998 "copy": false, 00:18:29.998 "nvme_iov_md": false 00:18:29.998 }, 00:18:29.998 "driver_specific": { 00:18:29.998 "lvol": { 00:18:29.998 "lvol_store_uuid": "867d4a25-516d-4326-8393-271889abf794", 00:18:29.998 "base_bdev": "nvme0n1", 00:18:29.998 "thin_provision": true, 00:18:29.998 "num_allocated_clusters": 0, 00:18:29.998 "snapshot": false, 00:18:29.998 "clone": false, 00:18:29.998 "esnap_clone": false 00:18:29.998 } 00:18:29.998 } 00:18:29.998 } 00:18:29.998 ]' 00:18:29.998 17:59:48 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:29.998 17:59:48 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:29.998 17:59:48 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:29.998 17:59:48 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:29.998 17:59:48 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:29.998 17:59:48 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:29.998 17:59:48 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:18:29.998 17:59:48 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:18:29.999 17:59:48 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:30.256 17:59:48 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:30.256 17:59:48 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:30.256 17:59:48 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size fae458ed-5b83-46ce-a61b-b0fcbaafd62d 00:18:30.256 17:59:48 
ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=fae458ed-5b83-46ce-a61b-b0fcbaafd62d 00:18:30.256 17:59:48 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:30.256 17:59:48 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:30.256 17:59:48 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:30.256 17:59:48 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fae458ed-5b83-46ce-a61b-b0fcbaafd62d 00:18:30.513 17:59:48 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:30.513 { 00:18:30.513 "name": "fae458ed-5b83-46ce-a61b-b0fcbaafd62d", 00:18:30.513 "aliases": [ 00:18:30.513 "lvs/nvme0n1p0" 00:18:30.513 ], 00:18:30.513 "product_name": "Logical Volume", 00:18:30.513 "block_size": 4096, 00:18:30.513 "num_blocks": 26476544, 00:18:30.513 "uuid": "fae458ed-5b83-46ce-a61b-b0fcbaafd62d", 00:18:30.513 "assigned_rate_limits": { 00:18:30.513 "rw_ios_per_sec": 0, 00:18:30.513 "rw_mbytes_per_sec": 0, 00:18:30.513 "r_mbytes_per_sec": 0, 00:18:30.513 "w_mbytes_per_sec": 0 00:18:30.513 }, 00:18:30.513 "claimed": false, 00:18:30.513 "zoned": false, 00:18:30.513 "supported_io_types": { 00:18:30.513 "read": true, 00:18:30.513 "write": true, 00:18:30.513 "unmap": true, 00:18:30.513 "flush": false, 00:18:30.513 "reset": true, 00:18:30.513 "nvme_admin": false, 00:18:30.513 "nvme_io": false, 00:18:30.513 "nvme_io_md": false, 00:18:30.513 "write_zeroes": true, 00:18:30.513 "zcopy": false, 00:18:30.513 "get_zone_info": false, 00:18:30.513 "zone_management": false, 00:18:30.513 "zone_append": false, 00:18:30.513 "compare": false, 00:18:30.513 "compare_and_write": false, 00:18:30.513 "abort": false, 00:18:30.513 "seek_hole": true, 00:18:30.513 "seek_data": true, 00:18:30.513 "copy": false, 00:18:30.513 "nvme_iov_md": false 00:18:30.513 }, 00:18:30.513 "driver_specific": { 00:18:30.513 "lvol": { 00:18:30.513 "lvol_store_uuid": "867d4a25-516d-4326-8393-271889abf794", 00:18:30.513 "base_bdev": "nvme0n1", 00:18:30.513 "thin_provision": true, 00:18:30.513 "num_allocated_clusters": 0, 00:18:30.513 "snapshot": false, 00:18:30.513 "clone": false, 00:18:30.513 "esnap_clone": false 00:18:30.513 } 00:18:30.513 } 00:18:30.513 } 00:18:30.513 ]' 00:18:30.513 17:59:48 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:30.513 17:59:48 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:30.513 17:59:48 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:30.513 17:59:48 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:30.513 17:59:48 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:30.513 17:59:48 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:30.513 17:59:48 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:18:30.513 17:59:48 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:30.771 17:59:49 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:30.771 17:59:49 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size fae458ed-5b83-46ce-a61b-b0fcbaafd62d 00:18:30.771 17:59:49 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=fae458ed-5b83-46ce-a61b-b0fcbaafd62d 00:18:30.771 17:59:49 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:30.771 17:59:49 ftl.ftl_restore -- 
common/autotest_common.sh@1380 -- # local bs 00:18:30.771 17:59:49 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:30.771 17:59:49 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fae458ed-5b83-46ce-a61b-b0fcbaafd62d 00:18:31.029 17:59:49 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:31.029 { 00:18:31.029 "name": "fae458ed-5b83-46ce-a61b-b0fcbaafd62d", 00:18:31.029 "aliases": [ 00:18:31.029 "lvs/nvme0n1p0" 00:18:31.029 ], 00:18:31.029 "product_name": "Logical Volume", 00:18:31.029 "block_size": 4096, 00:18:31.029 "num_blocks": 26476544, 00:18:31.029 "uuid": "fae458ed-5b83-46ce-a61b-b0fcbaafd62d", 00:18:31.029 "assigned_rate_limits": { 00:18:31.029 "rw_ios_per_sec": 0, 00:18:31.029 "rw_mbytes_per_sec": 0, 00:18:31.029 "r_mbytes_per_sec": 0, 00:18:31.029 "w_mbytes_per_sec": 0 00:18:31.029 }, 00:18:31.029 "claimed": false, 00:18:31.029 "zoned": false, 00:18:31.029 "supported_io_types": { 00:18:31.029 "read": true, 00:18:31.029 "write": true, 00:18:31.029 "unmap": true, 00:18:31.029 "flush": false, 00:18:31.029 "reset": true, 00:18:31.029 "nvme_admin": false, 00:18:31.029 "nvme_io": false, 00:18:31.029 "nvme_io_md": false, 00:18:31.029 "write_zeroes": true, 00:18:31.029 "zcopy": false, 00:18:31.029 "get_zone_info": false, 00:18:31.029 "zone_management": false, 00:18:31.029 "zone_append": false, 00:18:31.029 "compare": false, 00:18:31.029 "compare_and_write": false, 00:18:31.029 "abort": false, 00:18:31.029 "seek_hole": true, 00:18:31.029 "seek_data": true, 00:18:31.029 "copy": false, 00:18:31.029 "nvme_iov_md": false 00:18:31.029 }, 00:18:31.029 "driver_specific": { 00:18:31.029 "lvol": { 00:18:31.029 "lvol_store_uuid": "867d4a25-516d-4326-8393-271889abf794", 00:18:31.029 "base_bdev": "nvme0n1", 00:18:31.029 "thin_provision": true, 00:18:31.029 "num_allocated_clusters": 0, 00:18:31.029 "snapshot": false, 00:18:31.029 "clone": false, 00:18:31.029 "esnap_clone": false 00:18:31.029 } 00:18:31.029 } 00:18:31.029 } 00:18:31.029 ]' 00:18:31.029 17:59:49 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:31.029 17:59:49 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:31.029 17:59:49 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:31.029 17:59:49 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:31.029 17:59:49 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:31.029 17:59:49 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:31.029 17:59:49 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:31.029 17:59:49 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d fae458ed-5b83-46ce-a61b-b0fcbaafd62d --l2p_dram_limit 10' 00:18:31.029 17:59:49 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:31.029 17:59:49 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:31.029 17:59:49 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:31.029 17:59:49 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:31.029 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:31.029 17:59:49 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d fae458ed-5b83-46ce-a61b-b0fcbaafd62d --l2p_dram_limit 10 -c nvc0n1p0 00:18:31.288 
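The "integer expression expected" message at restore.sh line 54 is a script artifact, not a test failure: an option variable that was never set reaches the numeric test as an empty string, and [ '' -eq 1 ] is an error in bash that simply evaluates false, so the run continues. A guarded form of that kind of test, with a placeholder variable name (fast_mode is not from restore.sh):

    # Default the flag to 0 so the numeric test never sees an empty operand (sketch)
    if [ "${fast_mode:-0}" -eq 1 ]; then
        echo 'fast path requested'
    fi

With the arguments assembled, the rpc.py call above creates FTL bdev ftl0 on the thin-provisioned lvol fae458ed-5b83-46ce-a61b-b0fcbaafd62d, using nvc0n1p0 as the non-volatile write cache and capping the L2P table at 10 MiB of DRAM; the 'FTL startup' trace that follows mirrors the sequence seen earlier in the trim test.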
[2024-10-25 17:59:49.463732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.288 [2024-10-25 17:59:49.463778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:31.288 [2024-10-25 17:59:49.463791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:31.288 [2024-10-25 17:59:49.463800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.288 [2024-10-25 17:59:49.463858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.288 [2024-10-25 17:59:49.463867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:31.288 [2024-10-25 17:59:49.463875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:31.288 [2024-10-25 17:59:49.463881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.288 [2024-10-25 17:59:49.463902] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:31.288 [2024-10-25 17:59:49.464524] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:31.288 [2024-10-25 17:59:49.464543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.288 [2024-10-25 17:59:49.464549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:31.288 [2024-10-25 17:59:49.464566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.647 ms 00:18:31.288 [2024-10-25 17:59:49.464572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.288 [2024-10-25 17:59:49.464601] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f2aaecaf-110f-466b-a18e-9eabbbdbe30d 00:18:31.288 [2024-10-25 17:59:49.465576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.288 [2024-10-25 17:59:49.465604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:31.288 [2024-10-25 17:59:49.465612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:31.288 [2024-10-25 17:59:49.465621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.288 [2024-10-25 17:59:49.470455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.288 [2024-10-25 17:59:49.470485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:31.288 [2024-10-25 17:59:49.470494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.777 ms 00:18:31.288 [2024-10-25 17:59:49.470503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.288 [2024-10-25 17:59:49.470587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.288 [2024-10-25 17:59:49.470596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:31.288 [2024-10-25 17:59:49.470603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:18:31.288 [2024-10-25 17:59:49.470613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.288 [2024-10-25 17:59:49.470650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.288 [2024-10-25 17:59:49.470659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:31.288 [2024-10-25 17:59:49.470665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:31.288 [2024-10-25 17:59:49.470672] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.288 [2024-10-25 17:59:49.470690] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:31.288 [2024-10-25 17:59:49.473591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.288 [2024-10-25 17:59:49.473617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:31.288 [2024-10-25 17:59:49.473627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.905 ms 00:18:31.288 [2024-10-25 17:59:49.473637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.288 [2024-10-25 17:59:49.473665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.288 [2024-10-25 17:59:49.473671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:31.288 [2024-10-25 17:59:49.473679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:31.288 [2024-10-25 17:59:49.473685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.288 [2024-10-25 17:59:49.473705] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:31.289 [2024-10-25 17:59:49.473818] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:31.289 [2024-10-25 17:59:49.473833] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:31.289 [2024-10-25 17:59:49.473842] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:31.289 [2024-10-25 17:59:49.473852] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:31.289 [2024-10-25 17:59:49.473859] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:31.289 [2024-10-25 17:59:49.473866] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:31.289 [2024-10-25 17:59:49.473872] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:31.289 [2024-10-25 17:59:49.473879] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:31.289 [2024-10-25 17:59:49.473884] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:31.289 [2024-10-25 17:59:49.473893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.289 [2024-10-25 17:59:49.473899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:31.289 [2024-10-25 17:59:49.473906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:18:31.289 [2024-10-25 17:59:49.473916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.289 [2024-10-25 17:59:49.473984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.289 [2024-10-25 17:59:49.473990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:31.289 [2024-10-25 17:59:49.473997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:31.289 [2024-10-25 17:59:49.474002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.289 [2024-10-25 17:59:49.474079] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:31.289 [2024-10-25 17:59:49.474088] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:18:31.289 [2024-10-25 17:59:49.474096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:31.289 [2024-10-25 17:59:49.474102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:31.289 [2024-10-25 17:59:49.474109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:31.289 [2024-10-25 17:59:49.474115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:31.289 [2024-10-25 17:59:49.474121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:31.289 [2024-10-25 17:59:49.474127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:31.289 [2024-10-25 17:59:49.474133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:31.289 [2024-10-25 17:59:49.474138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:31.289 [2024-10-25 17:59:49.474145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:31.289 [2024-10-25 17:59:49.474151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:31.289 [2024-10-25 17:59:49.474158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:31.289 [2024-10-25 17:59:49.474163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:31.289 [2024-10-25 17:59:49.474169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:31.289 [2024-10-25 17:59:49.474174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:31.289 [2024-10-25 17:59:49.474182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:31.289 [2024-10-25 17:59:49.474187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:31.289 [2024-10-25 17:59:49.474194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:31.289 [2024-10-25 17:59:49.474199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:31.289 [2024-10-25 17:59:49.474207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:31.289 [2024-10-25 17:59:49.474213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:31.289 [2024-10-25 17:59:49.474219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:31.289 [2024-10-25 17:59:49.474224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:31.289 [2024-10-25 17:59:49.474231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:31.289 [2024-10-25 17:59:49.474236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:31.289 [2024-10-25 17:59:49.474243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:31.289 [2024-10-25 17:59:49.474248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:31.289 [2024-10-25 17:59:49.474254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:31.289 [2024-10-25 17:59:49.474259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:31.289 [2024-10-25 17:59:49.474265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:31.289 [2024-10-25 17:59:49.474270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:31.289 [2024-10-25 17:59:49.474278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:31.289 [2024-10-25 17:59:49.474283] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:31.289 [2024-10-25 17:59:49.474290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:31.289 [2024-10-25 17:59:49.474295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:31.289 [2024-10-25 17:59:49.474301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:31.289 [2024-10-25 17:59:49.474307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:31.289 [2024-10-25 17:59:49.474313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:31.289 [2024-10-25 17:59:49.474318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:31.289 [2024-10-25 17:59:49.474325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:31.289 [2024-10-25 17:59:49.474330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:31.289 [2024-10-25 17:59:49.474336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:31.289 [2024-10-25 17:59:49.474341] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:31.289 [2024-10-25 17:59:49.474349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:31.289 [2024-10-25 17:59:49.474354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:31.289 [2024-10-25 17:59:49.474361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:31.289 [2024-10-25 17:59:49.474367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:31.289 [2024-10-25 17:59:49.474375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:31.289 [2024-10-25 17:59:49.474381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:31.289 [2024-10-25 17:59:49.474387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:31.289 [2024-10-25 17:59:49.474392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:31.289 [2024-10-25 17:59:49.474400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:31.289 [2024-10-25 17:59:49.474408] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:31.289 [2024-10-25 17:59:49.474417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:31.289 [2024-10-25 17:59:49.474423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:31.289 [2024-10-25 17:59:49.474430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:31.289 [2024-10-25 17:59:49.474436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:31.289 [2024-10-25 17:59:49.474443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:31.289 [2024-10-25 17:59:49.474448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:31.289 [2024-10-25 17:59:49.474455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:18:31.289 [2024-10-25 17:59:49.474460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:31.289 [2024-10-25 17:59:49.474468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:31.289 [2024-10-25 17:59:49.474473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:31.289 [2024-10-25 17:59:49.474481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:31.289 [2024-10-25 17:59:49.474487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:31.289 [2024-10-25 17:59:49.474494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:31.289 [2024-10-25 17:59:49.474499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:31.289 [2024-10-25 17:59:49.474506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:31.289 [2024-10-25 17:59:49.474512] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:31.289 [2024-10-25 17:59:49.474519] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:31.289 [2024-10-25 17:59:49.474528] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:31.289 [2024-10-25 17:59:49.474535] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:31.289 [2024-10-25 17:59:49.474541] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:31.289 [2024-10-25 17:59:49.474548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:31.289 [2024-10-25 17:59:49.474569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.289 [2024-10-25 17:59:49.474577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:31.289 [2024-10-25 17:59:49.474583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:18:31.289 [2024-10-25 17:59:49.474591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.289 [2024-10-25 17:59:49.474633] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:18:31.289 [2024-10-25 17:59:49.474644] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:33.188 [2024-10-25 17:59:51.376176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.188 [2024-10-25 17:59:51.376395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:33.188 [2024-10-25 17:59:51.376416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1901.534 ms 00:18:33.188 [2024-10-25 17:59:51.376427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.188 [2024-10-25 17:59:51.401533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.188 [2024-10-25 17:59:51.401589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:33.188 [2024-10-25 17:59:51.401601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.889 ms 00:18:33.188 [2024-10-25 17:59:51.401611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.188 [2024-10-25 17:59:51.401731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.188 [2024-10-25 17:59:51.401743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:33.188 [2024-10-25 17:59:51.401751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:33.188 [2024-10-25 17:59:51.401762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.188 [2024-10-25 17:59:51.431802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.188 [2024-10-25 17:59:51.431838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:33.188 [2024-10-25 17:59:51.431848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.007 ms 00:18:33.188 [2024-10-25 17:59:51.431857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.188 [2024-10-25 17:59:51.431885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.188 [2024-10-25 17:59:51.431895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:33.188 [2024-10-25 17:59:51.431904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:33.188 [2024-10-25 17:59:51.431915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.188 [2024-10-25 17:59:51.432249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.188 [2024-10-25 17:59:51.432267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:33.188 [2024-10-25 17:59:51.432275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:18:33.188 [2024-10-25 17:59:51.432284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.188 [2024-10-25 17:59:51.432382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.188 [2024-10-25 17:59:51.432392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:33.188 [2024-10-25 17:59:51.432400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:18:33.188 [2024-10-25 17:59:51.432411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.188 [2024-10-25 17:59:51.446230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.188 [2024-10-25 17:59:51.446262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:33.188 [2024-10-25 
17:59:51.446272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.801 ms 00:18:33.188 [2024-10-25 17:59:51.446283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.188 [2024-10-25 17:59:51.457440] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:33.188 [2024-10-25 17:59:51.460060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.188 [2024-10-25 17:59:51.460090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:33.188 [2024-10-25 17:59:51.460102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.712 ms 00:18:33.188 [2024-10-25 17:59:51.460109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.188 [2024-10-25 17:59:51.520575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.188 [2024-10-25 17:59:51.520619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:33.188 [2024-10-25 17:59:51.520634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.440 ms 00:18:33.188 [2024-10-25 17:59:51.520642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.188 [2024-10-25 17:59:51.520815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.188 [2024-10-25 17:59:51.520826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:33.188 [2024-10-25 17:59:51.520838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:18:33.188 [2024-10-25 17:59:51.520848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.188 [2024-10-25 17:59:51.543497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.188 [2024-10-25 17:59:51.543529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:33.188 [2024-10-25 17:59:51.543541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.606 ms 00:18:33.188 [2024-10-25 17:59:51.543549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.188 [2024-10-25 17:59:51.565604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.188 [2024-10-25 17:59:51.565751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:33.188 [2024-10-25 17:59:51.565771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.006 ms 00:18:33.188 [2024-10-25 17:59:51.565778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.188 [2024-10-25 17:59:51.566327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.188 [2024-10-25 17:59:51.566344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:33.188 [2024-10-25 17:59:51.566354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:18:33.188 [2024-10-25 17:59:51.566361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.447 [2024-10-25 17:59:51.632133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.447 [2024-10-25 17:59:51.632164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:33.447 [2024-10-25 17:59:51.632179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.724 ms 00:18:33.447 [2024-10-25 17:59:51.632187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.447 [2024-10-25 
17:59:51.655791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.447 [2024-10-25 17:59:51.655918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:33.447 [2024-10-25 17:59:51.655940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.536 ms 00:18:33.447 [2024-10-25 17:59:51.655948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.447 [2024-10-25 17:59:51.678157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.447 [2024-10-25 17:59:51.678188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:33.447 [2024-10-25 17:59:51.678200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.175 ms 00:18:33.447 [2024-10-25 17:59:51.678207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.447 [2024-10-25 17:59:51.700672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.447 [2024-10-25 17:59:51.700701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:33.447 [2024-10-25 17:59:51.700713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.429 ms 00:18:33.447 [2024-10-25 17:59:51.700721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.447 [2024-10-25 17:59:51.700848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.447 [2024-10-25 17:59:51.700856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:33.447 [2024-10-25 17:59:51.700868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:33.447 [2024-10-25 17:59:51.700876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.447 [2024-10-25 17:59:51.700948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.447 [2024-10-25 17:59:51.700958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:33.447 [2024-10-25 17:59:51.700967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:33.447 [2024-10-25 17:59:51.700974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.447 [2024-10-25 17:59:51.701835] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2237.691 ms, result 0 00:18:33.447 { 00:18:33.447 "name": "ftl0", 00:18:33.447 "uuid": "f2aaecaf-110f-466b-a18e-9eabbbdbe30d" 00:18:33.447 } 00:18:33.447 17:59:51 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:33.447 17:59:51 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:33.705 17:59:51 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:33.705 17:59:51 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:33.705 [2024-10-25 17:59:52.113344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.705 [2024-10-25 17:59:52.113393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:33.705 [2024-10-25 17:59:52.113406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:33.705 [2024-10-25 17:59:52.113422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.705 [2024-10-25 17:59:52.113447] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
00:18:33.705 [2024-10-25 17:59:52.116060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.705 [2024-10-25 17:59:52.116088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:33.705 [2024-10-25 17:59:52.116103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.595 ms 00:18:33.705 [2024-10-25 17:59:52.116111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.705 [2024-10-25 17:59:52.116367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.705 [2024-10-25 17:59:52.116376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:33.705 [2024-10-25 17:59:52.116386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:18:33.705 [2024-10-25 17:59:52.116395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.705 [2024-10-25 17:59:52.119641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.705 [2024-10-25 17:59:52.119660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:33.705 [2024-10-25 17:59:52.119671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.229 ms 00:18:33.705 [2024-10-25 17:59:52.119680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.705 [2024-10-25 17:59:52.125890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.705 [2024-10-25 17:59:52.125913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:33.705 [2024-10-25 17:59:52.125925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.191 ms 00:18:33.705 [2024-10-25 17:59:52.125934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.965 [2024-10-25 17:59:52.149233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.965 [2024-10-25 17:59:52.149265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:33.965 [2024-10-25 17:59:52.149278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.232 ms 00:18:33.965 [2024-10-25 17:59:52.149286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.965 [2024-10-25 17:59:52.164500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.965 [2024-10-25 17:59:52.164536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:33.965 [2024-10-25 17:59:52.164550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.174 ms 00:18:33.965 [2024-10-25 17:59:52.164575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.965 [2024-10-25 17:59:52.164721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.965 [2024-10-25 17:59:52.164732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:33.965 [2024-10-25 17:59:52.164742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:18:33.965 [2024-10-25 17:59:52.164749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.965 [2024-10-25 17:59:52.188041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.965 [2024-10-25 17:59:52.188074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:33.965 [2024-10-25 17:59:52.188086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.271 ms 00:18:33.965 [2024-10-25 17:59:52.188093] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.965 [2024-10-25 17:59:52.213846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.965 [2024-10-25 17:59:52.213974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:33.965 [2024-10-25 17:59:52.213994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.713 ms 00:18:33.965 [2024-10-25 17:59:52.214002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.965 [2024-10-25 17:59:52.235915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.965 [2024-10-25 17:59:52.235946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:33.965 [2024-10-25 17:59:52.235958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.874 ms 00:18:33.965 [2024-10-25 17:59:52.235966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.965 [2024-10-25 17:59:52.257848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.965 [2024-10-25 17:59:52.257876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:33.965 [2024-10-25 17:59:52.257888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.809 ms 00:18:33.965 [2024-10-25 17:59:52.257895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.965 [2024-10-25 17:59:52.257930] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:33.965 [2024-10-25 17:59:52.257943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.257955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.257962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.257972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.257979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.257988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.257996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 
17:59:52.258062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:18:33.965 [2024-10-25 17:59:52.258273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:33.965 [2024-10-25 17:59:52.258361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:33.966 [2024-10-25 17:59:52.258811] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:33.966 [2024-10-25 17:59:52.258820] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f2aaecaf-110f-466b-a18e-9eabbbdbe30d 00:18:33.966 [2024-10-25 17:59:52.258829] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:33.966 [2024-10-25 17:59:52.258841] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:33.966 [2024-10-25 17:59:52.258848] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:33.966 [2024-10-25 17:59:52.258857] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:33.966 [2024-10-25 17:59:52.258866] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:33.966 [2024-10-25 17:59:52.258875] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:33.966 [2024-10-25 17:59:52.258882] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:33.966 [2024-10-25 17:59:52.258890] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:33.966 [2024-10-25 17:59:52.258897] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:33.966 [2024-10-25 17:59:52.258905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.966 [2024-10-25 17:59:52.258912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:33.966 [2024-10-25 17:59:52.258922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 00:18:33.966 [2024-10-25 17:59:52.258929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.966 [2024-10-25 17:59:52.270980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.966 [2024-10-25 17:59:52.271009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
00:18:33.966 [2024-10-25 17:59:52.271022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.020 ms 00:18:33.966 [2024-10-25 17:59:52.271031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.966 [2024-10-25 17:59:52.271369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.966 [2024-10-25 17:59:52.271382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:33.966 [2024-10-25 17:59:52.271392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:18:33.966 [2024-10-25 17:59:52.271399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.966 [2024-10-25 17:59:52.312595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.966 [2024-10-25 17:59:52.312730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:33.966 [2024-10-25 17:59:52.312749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.966 [2024-10-25 17:59:52.312757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.966 [2024-10-25 17:59:52.312816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.966 [2024-10-25 17:59:52.312824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:33.966 [2024-10-25 17:59:52.312834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.966 [2024-10-25 17:59:52.312841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.966 [2024-10-25 17:59:52.312922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.966 [2024-10-25 17:59:52.312932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:33.966 [2024-10-25 17:59:52.312941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.966 [2024-10-25 17:59:52.312948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.966 [2024-10-25 17:59:52.312970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.966 [2024-10-25 17:59:52.312977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:33.966 [2024-10-25 17:59:52.312986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.966 [2024-10-25 17:59:52.312993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.966 [2024-10-25 17:59:52.387736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.966 [2024-10-25 17:59:52.387883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:33.966 [2024-10-25 17:59:52.387902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.966 [2024-10-25 17:59:52.387910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.225 [2024-10-25 17:59:52.449289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.225 [2024-10-25 17:59:52.449335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:34.225 [2024-10-25 17:59:52.449347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.225 [2024-10-25 17:59:52.449354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.225 [2024-10-25 17:59:52.449423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.225 [2024-10-25 17:59:52.449435] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:34.225 [2024-10-25 17:59:52.449444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.225 [2024-10-25 17:59:52.449451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.225 [2024-10-25 17:59:52.449529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.225 [2024-10-25 17:59:52.449544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:34.225 [2024-10-25 17:59:52.449582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.225 [2024-10-25 17:59:52.449591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.225 [2024-10-25 17:59:52.449680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.225 [2024-10-25 17:59:52.449689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:34.225 [2024-10-25 17:59:52.449701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.225 [2024-10-25 17:59:52.449708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.225 [2024-10-25 17:59:52.449742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.225 [2024-10-25 17:59:52.449751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:34.225 [2024-10-25 17:59:52.449761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.225 [2024-10-25 17:59:52.449768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.225 [2024-10-25 17:59:52.449802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.225 [2024-10-25 17:59:52.449811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:34.225 [2024-10-25 17:59:52.449821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.225 [2024-10-25 17:59:52.449828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.225 [2024-10-25 17:59:52.449871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.225 [2024-10-25 17:59:52.449880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:34.225 [2024-10-25 17:59:52.449889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.225 [2024-10-25 17:59:52.449896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.225 [2024-10-25 17:59:52.450017] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 336.642 ms, result 0 00:18:34.225 true 00:18:34.225 17:59:52 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 74181 00:18:34.226 17:59:52 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 74181 ']' 00:18:34.226 17:59:52 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 74181 00:18:34.226 17:59:52 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:18:34.226 17:59:52 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:34.226 17:59:52 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74181 00:18:34.226 17:59:52 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:34.226 17:59:52 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:34.226 17:59:52 ftl.ftl_restore -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 74181' 00:18:34.226 killing process with pid 74181 00:18:34.226 17:59:52 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 74181 00:18:34.226 17:59:52 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 74181 00:18:46.419 18:00:04 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:50.601 262144+0 records in 00:18:50.601 262144+0 records out 00:18:50.601 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.00555 s, 268 MB/s 00:18:50.601 18:00:08 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:51.534 18:00:09 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:51.534 [2024-10-25 18:00:09.945163] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:18:51.534 [2024-10-25 18:00:09.945411] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74389 ] 00:18:51.793 [2024-10-25 18:00:10.103223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.793 [2024-10-25 18:00:10.202372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.050 [2024-10-25 18:00:10.452156] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:52.050 [2024-10-25 18:00:10.452219] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:52.310 [2024-10-25 18:00:10.606697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.310 [2024-10-25 18:00:10.606738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:52.310 [2024-10-25 18:00:10.606753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:52.310 [2024-10-25 18:00:10.606761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.310 [2024-10-25 18:00:10.606802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.310 [2024-10-25 18:00:10.606812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:52.310 [2024-10-25 18:00:10.606823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:52.310 [2024-10-25 18:00:10.606830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.310 [2024-10-25 18:00:10.606846] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:52.310 [2024-10-25 18:00:10.607482] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:52.310 [2024-10-25 18:00:10.607498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.310 [2024-10-25 18:00:10.607506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:52.310 [2024-10-25 18:00:10.607514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.656 ms 00:18:52.310 [2024-10-25 18:00:10.607521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.310 [2024-10-25 18:00:10.608505] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: 
*NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:52.310 [2024-10-25 18:00:10.623516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.310 [2024-10-25 18:00:10.623550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:52.310 [2024-10-25 18:00:10.623573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.012 ms 00:18:52.310 [2024-10-25 18:00:10.623582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.310 [2024-10-25 18:00:10.623633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.310 [2024-10-25 18:00:10.623645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:52.310 [2024-10-25 18:00:10.623654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:52.310 [2024-10-25 18:00:10.623661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.310 [2024-10-25 18:00:10.628445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.310 [2024-10-25 18:00:10.628473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:52.310 [2024-10-25 18:00:10.628482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.720 ms 00:18:52.310 [2024-10-25 18:00:10.628490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.310 [2024-10-25 18:00:10.628578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.310 [2024-10-25 18:00:10.628588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:52.310 [2024-10-25 18:00:10.628596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:18:52.310 [2024-10-25 18:00:10.628603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.310 [2024-10-25 18:00:10.628638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.310 [2024-10-25 18:00:10.628647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:52.310 [2024-10-25 18:00:10.628655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:52.310 [2024-10-25 18:00:10.628662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.310 [2024-10-25 18:00:10.628683] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:52.310 [2024-10-25 18:00:10.631779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.310 [2024-10-25 18:00:10.631803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:52.310 [2024-10-25 18:00:10.631812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.101 ms 00:18:52.310 [2024-10-25 18:00:10.631823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.310 [2024-10-25 18:00:10.631849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.310 [2024-10-25 18:00:10.631857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:52.310 [2024-10-25 18:00:10.631865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:52.310 [2024-10-25 18:00:10.631872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.310 [2024-10-25 18:00:10.631891] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:52.310 [2024-10-25 18:00:10.631908] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:52.310 [2024-10-25 18:00:10.631940] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:52.310 [2024-10-25 18:00:10.631957] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:52.310 [2024-10-25 18:00:10.632058] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:52.310 [2024-10-25 18:00:10.632068] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:52.310 [2024-10-25 18:00:10.632079] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:52.310 [2024-10-25 18:00:10.632088] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:52.310 [2024-10-25 18:00:10.632096] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:52.310 [2024-10-25 18:00:10.632103] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:52.310 [2024-10-25 18:00:10.632110] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:52.310 [2024-10-25 18:00:10.632118] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:52.310 [2024-10-25 18:00:10.632125] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:52.310 [2024-10-25 18:00:10.632134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.310 [2024-10-25 18:00:10.632141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:52.310 [2024-10-25 18:00:10.632148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:18:52.310 [2024-10-25 18:00:10.632156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.310 [2024-10-25 18:00:10.632238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.310 [2024-10-25 18:00:10.632245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:52.310 [2024-10-25 18:00:10.632252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:52.310 [2024-10-25 18:00:10.632259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.310 [2024-10-25 18:00:10.632358] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:52.310 [2024-10-25 18:00:10.632369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:52.310 [2024-10-25 18:00:10.632377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:52.310 [2024-10-25 18:00:10.632384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:52.310 [2024-10-25 18:00:10.632392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:52.310 [2024-10-25 18:00:10.632398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:52.310 [2024-10-25 18:00:10.632405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:52.310 [2024-10-25 18:00:10.632412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:52.311 [2024-10-25 18:00:10.632419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:52.311 [2024-10-25 
18:00:10.632426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:52.311 [2024-10-25 18:00:10.632432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:52.311 [2024-10-25 18:00:10.632438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:52.311 [2024-10-25 18:00:10.632445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:52.311 [2024-10-25 18:00:10.632451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:52.311 [2024-10-25 18:00:10.632458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:52.311 [2024-10-25 18:00:10.632469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:52.311 [2024-10-25 18:00:10.632475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:52.311 [2024-10-25 18:00:10.632481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:52.311 [2024-10-25 18:00:10.632487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:52.311 [2024-10-25 18:00:10.632494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:52.311 [2024-10-25 18:00:10.632501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:52.311 [2024-10-25 18:00:10.632508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:52.311 [2024-10-25 18:00:10.632514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:52.311 [2024-10-25 18:00:10.632521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:52.311 [2024-10-25 18:00:10.632527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:52.311 [2024-10-25 18:00:10.632534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:52.311 [2024-10-25 18:00:10.632541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:52.311 [2024-10-25 18:00:10.632547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:52.311 [2024-10-25 18:00:10.632553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:52.311 [2024-10-25 18:00:10.632576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:52.311 [2024-10-25 18:00:10.632582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:52.311 [2024-10-25 18:00:10.632589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:52.311 [2024-10-25 18:00:10.632596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:52.311 [2024-10-25 18:00:10.632602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:52.311 [2024-10-25 18:00:10.632609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:52.311 [2024-10-25 18:00:10.632621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:52.311 [2024-10-25 18:00:10.632628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:52.311 [2024-10-25 18:00:10.632634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:52.311 [2024-10-25 18:00:10.632641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:52.311 [2024-10-25 18:00:10.632647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:52.311 [2024-10-25 18:00:10.632654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:18:52.311 [2024-10-25 18:00:10.632660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:52.311 [2024-10-25 18:00:10.632667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:52.311 [2024-10-25 18:00:10.632673] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:52.311 [2024-10-25 18:00:10.632680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:52.311 [2024-10-25 18:00:10.632688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:52.311 [2024-10-25 18:00:10.632694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:52.311 [2024-10-25 18:00:10.632701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:52.311 [2024-10-25 18:00:10.632708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:52.311 [2024-10-25 18:00:10.632715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:52.311 [2024-10-25 18:00:10.632721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:52.311 [2024-10-25 18:00:10.632727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:52.311 [2024-10-25 18:00:10.632734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:52.311 [2024-10-25 18:00:10.632744] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:52.311 [2024-10-25 18:00:10.632752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:52.311 [2024-10-25 18:00:10.632760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:52.311 [2024-10-25 18:00:10.632767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:52.311 [2024-10-25 18:00:10.632774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:52.311 [2024-10-25 18:00:10.632781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:52.311 [2024-10-25 18:00:10.632789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:52.311 [2024-10-25 18:00:10.632795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:52.311 [2024-10-25 18:00:10.632802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:52.311 [2024-10-25 18:00:10.632810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:52.311 [2024-10-25 18:00:10.632817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:52.311 [2024-10-25 18:00:10.632823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:52.311 [2024-10-25 18:00:10.632831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:52.311 [2024-10-25 18:00:10.632838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:52.311 [2024-10-25 18:00:10.632845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:52.311 [2024-10-25 18:00:10.632852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:52.311 [2024-10-25 18:00:10.632859] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:52.311 [2024-10-25 18:00:10.632868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:52.311 [2024-10-25 18:00:10.632878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:52.311 [2024-10-25 18:00:10.632885] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:52.311 [2024-10-25 18:00:10.632892] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:52.311 [2024-10-25 18:00:10.632899] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:52.311 [2024-10-25 18:00:10.632906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.311 [2024-10-25 18:00:10.632913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:52.311 [2024-10-25 18:00:10.632920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:18:52.311 [2024-10-25 18:00:10.632927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.311 [2024-10-25 18:00:10.658141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.311 [2024-10-25 18:00:10.658173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:52.311 [2024-10-25 18:00:10.658183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.161 ms 00:18:52.311 [2024-10-25 18:00:10.658191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.311 [2024-10-25 18:00:10.658271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.311 [2024-10-25 18:00:10.658282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:52.311 [2024-10-25 18:00:10.658290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:52.311 [2024-10-25 18:00:10.658298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.311 [2024-10-25 18:00:10.698337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.311 [2024-10-25 18:00:10.698374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:52.311 [2024-10-25 18:00:10.698385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.988 ms 00:18:52.311 [2024-10-25 18:00:10.698394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.311 [2024-10-25 18:00:10.698432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.311 [2024-10-25 
18:00:10.698441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:52.312 [2024-10-25 18:00:10.698450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:52.312 [2024-10-25 18:00:10.698460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.312 [2024-10-25 18:00:10.698808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.312 [2024-10-25 18:00:10.698824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:52.312 [2024-10-25 18:00:10.698832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:18:52.312 [2024-10-25 18:00:10.698840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.312 [2024-10-25 18:00:10.698956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.312 [2024-10-25 18:00:10.698970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:52.312 [2024-10-25 18:00:10.698978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:18:52.312 [2024-10-25 18:00:10.698985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.312 [2024-10-25 18:00:10.711682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.312 [2024-10-25 18:00:10.711711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:52.312 [2024-10-25 18:00:10.711721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.659 ms 00:18:52.312 [2024-10-25 18:00:10.711731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.312 [2024-10-25 18:00:10.725207] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:52.312 [2024-10-25 18:00:10.725378] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:52.312 [2024-10-25 18:00:10.725393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.312 [2024-10-25 18:00:10.725401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:52.312 [2024-10-25 18:00:10.725410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.574 ms 00:18:52.312 [2024-10-25 18:00:10.725417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.569 [2024-10-25 18:00:10.753293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.569 [2024-10-25 18:00:10.753338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:52.569 [2024-10-25 18:00:10.753353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.838 ms 00:18:52.569 [2024-10-25 18:00:10.753360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.569 [2024-10-25 18:00:10.767836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.569 [2024-10-25 18:00:10.767962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:52.569 [2024-10-25 18:00:10.767977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.438 ms 00:18:52.569 [2024-10-25 18:00:10.767984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.569 [2024-10-25 18:00:10.781882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.569 [2024-10-25 18:00:10.781910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore trim metadata 00:18:52.569 [2024-10-25 18:00:10.781920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.870 ms 00:18:52.569 [2024-10-25 18:00:10.781927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.569 [2024-10-25 18:00:10.782531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.569 [2024-10-25 18:00:10.782549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:52.569 [2024-10-25 18:00:10.782575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:18:52.569 [2024-10-25 18:00:10.782583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.569 [2024-10-25 18:00:10.850053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.569 [2024-10-25 18:00:10.850252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:52.569 [2024-10-25 18:00:10.850272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.453 ms 00:18:52.569 [2024-10-25 18:00:10.850282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.569 [2024-10-25 18:00:10.860699] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:52.569 [2024-10-25 18:00:10.863199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.569 [2024-10-25 18:00:10.863228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:52.569 [2024-10-25 18:00:10.863240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.876 ms 00:18:52.569 [2024-10-25 18:00:10.863248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.569 [2024-10-25 18:00:10.863324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.569 [2024-10-25 18:00:10.863334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:52.569 [2024-10-25 18:00:10.863343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:52.569 [2024-10-25 18:00:10.863350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.569 [2024-10-25 18:00:10.863412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.569 [2024-10-25 18:00:10.863425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:52.569 [2024-10-25 18:00:10.863433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:52.569 [2024-10-25 18:00:10.863441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.569 [2024-10-25 18:00:10.863458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.569 [2024-10-25 18:00:10.863466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:52.569 [2024-10-25 18:00:10.863474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:52.569 [2024-10-25 18:00:10.863481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.569 [2024-10-25 18:00:10.863510] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:52.569 [2024-10-25 18:00:10.863520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.569 [2024-10-25 18:00:10.863527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:52.569 [2024-10-25 18:00:10.863537] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:52.569 [2024-10-25 18:00:10.863544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.569 [2024-10-25 18:00:10.886567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.569 [2024-10-25 18:00:10.886599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:52.569 [2024-10-25 18:00:10.886610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.986 ms 00:18:52.569 [2024-10-25 18:00:10.886618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.569 [2024-10-25 18:00:10.886693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.569 [2024-10-25 18:00:10.886702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:52.569 [2024-10-25 18:00:10.886711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:52.569 [2024-10-25 18:00:10.886718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.569 [2024-10-25 18:00:10.887595] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 280.466 ms, result 0 00:18:53.503  [2024-10-25T18:00:12.937Z] Copying: 41/1024 [MB] (41 MBps) [2024-10-25T18:00:14.326Z] Copying: 83/1024 [MB] (42 MBps) [2024-10-25T18:00:15.264Z] Copying: 125/1024 [MB] (41 MBps) [2024-10-25T18:00:16.197Z] Copying: 159/1024 [MB] (34 MBps) [2024-10-25T18:00:17.128Z] Copying: 196/1024 [MB] (36 MBps) [2024-10-25T18:00:18.060Z] Copying: 242/1024 [MB] (46 MBps) [2024-10-25T18:00:18.993Z] Copying: 287/1024 [MB] (45 MBps) [2024-10-25T18:00:19.923Z] Copying: 333/1024 [MB] (45 MBps) [2024-10-25T18:00:21.296Z] Copying: 378/1024 [MB] (45 MBps) [2024-10-25T18:00:22.233Z] Copying: 423/1024 [MB] (44 MBps) [2024-10-25T18:00:23.251Z] Copying: 467/1024 [MB] (43 MBps) [2024-10-25T18:00:24.189Z] Copying: 508/1024 [MB] (41 MBps) [2024-10-25T18:00:25.125Z] Copying: 547/1024 [MB] (38 MBps) [2024-10-25T18:00:26.060Z] Copying: 592/1024 [MB] (45 MBps) [2024-10-25T18:00:26.997Z] Copying: 637/1024 [MB] (44 MBps) [2024-10-25T18:00:27.938Z] Copying: 676/1024 [MB] (39 MBps) [2024-10-25T18:00:29.322Z] Copying: 698/1024 [MB] (21 MBps) [2024-10-25T18:00:30.265Z] Copying: 724/1024 [MB] (26 MBps) [2024-10-25T18:00:31.208Z] Copying: 760/1024 [MB] (35 MBps) [2024-10-25T18:00:32.148Z] Copying: 778/1024 [MB] (18 MBps) [2024-10-25T18:00:33.104Z] Copying: 796/1024 [MB] (17 MBps) [2024-10-25T18:00:34.048Z] Copying: 820/1024 [MB] (24 MBps) [2024-10-25T18:00:34.988Z] Copying: 841/1024 [MB] (20 MBps) [2024-10-25T18:00:35.929Z] Copying: 864/1024 [MB] (23 MBps) [2024-10-25T18:00:37.316Z] Copying: 888/1024 [MB] (23 MBps) [2024-10-25T18:00:38.258Z] Copying: 919/1024 [MB] (31 MBps) [2024-10-25T18:00:39.251Z] Copying: 939/1024 [MB] (19 MBps) [2024-10-25T18:00:40.188Z] Copying: 970/1024 [MB] (31 MBps) [2024-10-25T18:00:41.134Z] Copying: 997/1024 [MB] (27 MBps) [2024-10-25T18:00:41.396Z] Copying: 1018/1024 [MB] (20 MBps) [2024-10-25T18:00:41.396Z] Copying: 1024/1024 [MB] (average 33 MBps)[2024-10-25 18:00:41.176553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.961 [2024-10-25 18:00:41.176622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:22.961 [2024-10-25 18:00:41.176635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:22.961 [2024-10-25 18:00:41.176643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
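
The copy progress above completes 1024/1024 MB at an average of 33 MBps, and the dd line earlier reported 268 MB/s when generating the same file. A quick back-of-the-envelope check, using only values printed in the log (the 4 KiB block size comes from the dd invocation's bs=4K):

# Sanity checks against the figures reported in the log above.
echo $((262144 * 4096))      # 1073741824 bytes = 1.0 GiB, as dd reported
echo $((1073741824 / 4))     # ~268 MB/s for the reported 4.00555 s transfer
echo $((1024 / 33))          # ~31 s copy at 33 MBps; the trace timestamps
                             # span 18:00:11 to 18:00:41, about 30 s
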
00:19:22.961 [2024-10-25 18:00:41.176664] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:22.961 [2024-10-25 18:00:41.179273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.961 [2024-10-25 18:00:41.179404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:22.961 [2024-10-25 18:00:41.179421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.595 ms 00:19:22.961 [2024-10-25 18:00:41.179428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.961 [2024-10-25 18:00:41.182323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.961 [2024-10-25 18:00:41.182353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:22.961 [2024-10-25 18:00:41.182362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.867 ms 00:19:22.961 [2024-10-25 18:00:41.182370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.961 [2024-10-25 18:00:41.199444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.961 [2024-10-25 18:00:41.199548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:22.961 [2024-10-25 18:00:41.199614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.059 ms 00:19:22.961 [2024-10-25 18:00:41.199638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.961 [2024-10-25 18:00:41.205865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.961 [2024-10-25 18:00:41.205901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:22.961 [2024-10-25 18:00:41.205912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.135 ms 00:19:22.961 [2024-10-25 18:00:41.205920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.961 [2024-10-25 18:00:41.230290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.961 [2024-10-25 18:00:41.230334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:22.961 [2024-10-25 18:00:41.230346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.305 ms 00:19:22.961 [2024-10-25 18:00:41.230353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.961 [2024-10-25 18:00:41.246014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.961 [2024-10-25 18:00:41.246045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:22.961 [2024-10-25 18:00:41.246056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.629 ms 00:19:22.961 [2024-10-25 18:00:41.246065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.961 [2024-10-25 18:00:41.246185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.962 [2024-10-25 18:00:41.246195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:22.962 [2024-10-25 18:00:41.246204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:19:22.962 [2024-10-25 18:00:41.246215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.962 [2024-10-25 18:00:41.271020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.962 [2024-10-25 18:00:41.271050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:22.962 
[2024-10-25 18:00:41.271061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.792 ms 00:19:22.962 [2024-10-25 18:00:41.271070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.962 [2024-10-25 18:00:41.294412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.962 [2024-10-25 18:00:41.294443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:22.962 [2024-10-25 18:00:41.294461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.310 ms 00:19:22.962 [2024-10-25 18:00:41.294469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.962 [2024-10-25 18:00:41.317245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.962 [2024-10-25 18:00:41.317448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:22.962 [2024-10-25 18:00:41.317467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.742 ms 00:19:22.962 [2024-10-25 18:00:41.317474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.962 [2024-10-25 18:00:41.340611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.962 [2024-10-25 18:00:41.340654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:22.962 [2024-10-25 18:00:41.340666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.069 ms 00:19:22.962 [2024-10-25 18:00:41.340673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.962 [2024-10-25 18:00:41.340712] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:22.962 [2024-10-25 18:00:41.340727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 
[2024-10-25 18:00:41.340830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.340995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.341003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.341010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 
state: free 00:19:22.962 [2024-10-25 18:00:41.341017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.341024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.341032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.341040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.341047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.341055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.341062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.341070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.341077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.341084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.341091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.341099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:22.962 [2024-10-25 18:00:41.341106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 
0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:22.963 [2024-10-25 18:00:41.341493] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:22.963 [2024-10-25 18:00:41.341504] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f2aaecaf-110f-466b-a18e-9eabbbdbe30d 00:19:22.963 [2024-10-25 18:00:41.341512] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:22.963 [2024-10-25 18:00:41.341521] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:22.963 [2024-10-25 18:00:41.341529] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:22.963 [2024-10-25 18:00:41.341537] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:22.963 [2024-10-25 18:00:41.341543] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:22.963 [2024-10-25 18:00:41.341579] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:22.963 [2024-10-25 18:00:41.341587] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:22.963 [2024-10-25 18:00:41.341601] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:22.963 [2024-10-25 18:00:41.341608] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:22.963 [2024-10-25 18:00:41.341616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.963 [2024-10-25 18:00:41.341623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:22.963 [2024-10-25 18:00:41.341631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.905 ms 00:19:22.964 [2024-10-25 18:00:41.341638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.964 [2024-10-25 18:00:41.354118] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:19:22.964 [2024-10-25 18:00:41.354159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:22.964 [2024-10-25 18:00:41.354170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.460 ms 00:19:22.964 [2024-10-25 18:00:41.354180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.964 [2024-10-25 18:00:41.354532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.964 [2024-10-25 18:00:41.354541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:22.964 [2024-10-25 18:00:41.354549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:19:22.964 [2024-10-25 18:00:41.354572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.964 [2024-10-25 18:00:41.389582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.964 [2024-10-25 18:00:41.389622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:22.964 [2024-10-25 18:00:41.389635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.964 [2024-10-25 18:00:41.389644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.964 [2024-10-25 18:00:41.389710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.964 [2024-10-25 18:00:41.389718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:22.964 [2024-10-25 18:00:41.389726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.964 [2024-10-25 18:00:41.389733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.964 [2024-10-25 18:00:41.389792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.964 [2024-10-25 18:00:41.389801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:22.964 [2024-10-25 18:00:41.389809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.964 [2024-10-25 18:00:41.389816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.964 [2024-10-25 18:00:41.389830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.964 [2024-10-25 18:00:41.389838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:22.964 [2024-10-25 18:00:41.389846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.964 [2024-10-25 18:00:41.389853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.226 [2024-10-25 18:00:41.465797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:23.226 [2024-10-25 18:00:41.465841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:23.226 [2024-10-25 18:00:41.465853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:23.226 [2024-10-25 18:00:41.465861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.226 [2024-10-25 18:00:41.528211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:23.226 [2024-10-25 18:00:41.528375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:23.226 [2024-10-25 18:00:41.528390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:23.226 [2024-10-25 18:00:41.528398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
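
The statistics dump a few lines up reports WAF: inf alongside total writes: 960 and user writes: 0. Write amplification is conventionally media writes divided by user writes, so a zero denominator prints as inf; a guarded version of the same calculation (the formula is an assumption about how ftl_debug.c derives the value):

total=960; user=0            # values from the ftl_debug stats dump above
if [ "$user" -eq 0 ]; then echo inf; else echo $((total / user)); fi
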
00:19:23.226 [2024-10-25 18:00:41.528466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:23.226 [2024-10-25 18:00:41.528481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:23.226 [2024-10-25 18:00:41.528490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:23.226 [2024-10-25 18:00:41.528497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.226 [2024-10-25 18:00:41.528530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:23.226 [2024-10-25 18:00:41.528538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:23.226 [2024-10-25 18:00:41.528546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:23.226 [2024-10-25 18:00:41.528569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.226 [2024-10-25 18:00:41.528661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:23.226 [2024-10-25 18:00:41.528671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:23.226 [2024-10-25 18:00:41.528682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:23.226 [2024-10-25 18:00:41.528689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.226 [2024-10-25 18:00:41.528715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:23.226 [2024-10-25 18:00:41.528724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:23.226 [2024-10-25 18:00:41.528731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:23.226 [2024-10-25 18:00:41.528739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.226 [2024-10-25 18:00:41.528770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:23.226 [2024-10-25 18:00:41.528778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:23.226 [2024-10-25 18:00:41.528788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:23.226 [2024-10-25 18:00:41.528796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.226 [2024-10-25 18:00:41.528832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:23.226 [2024-10-25 18:00:41.528842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:23.226 [2024-10-25 18:00:41.528850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:23.226 [2024-10-25 18:00:41.528857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.226 [2024-10-25 18:00:41.528968] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 352.385 ms, result 0 00:19:25.775 00:19:25.775 00:19:25.775 18:00:44 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:25.775 [2024-10-25 18:00:44.205005] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
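
At this point the write leg is complete: restore.sh@69 filled a 1 GiB testfile from /dev/urandom, @70 took its md5sum, @73 wrote it into ftl0 through spdk_dd, and the FTL device shut down cleanly. The @74 command starting here reads the same 262144 blocks back out. A condensed sketch of that round trip, with paths and flags as printed in the log; the final md5 comparison is an assumption about what restore.sh does with the two checksums:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
TESTFILE=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
FTL_JSON=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

dd if=/dev/urandom of="$TESTFILE" bs=4K count=256K        # restore.sh@69
md5_before=$(md5sum "$TESTFILE" | cut -d' ' -f1)          # restore.sh@70
"$SPDK_DD" --if="$TESTFILE" --ob=ftl0 \
           --json="$FTL_JSON"                             # restore.sh@73: write leg
"$SPDK_DD" --ib=ftl0 --of="$TESTFILE" \
           --json="$FTL_JSON" --count=262144              # restore.sh@74: read leg
md5_after=$(md5sum "$TESTFILE" | cut -d' ' -f1)           # assumed: compared to md5_before
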
00:19:25.775 [2024-10-25 18:00:44.205130] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74735 ] 00:19:26.036 [2024-10-25 18:00:44.364328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.036 [2024-10-25 18:00:44.464674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.297 [2024-10-25 18:00:44.717513] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:26.297 [2024-10-25 18:00:44.717602] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:26.560 [2024-10-25 18:00:44.875687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.560 [2024-10-25 18:00:44.875740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:26.560 [2024-10-25 18:00:44.875756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:26.560 [2024-10-25 18:00:44.875763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.560 [2024-10-25 18:00:44.875812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.560 [2024-10-25 18:00:44.875822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:26.560 [2024-10-25 18:00:44.875831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:26.560 [2024-10-25 18:00:44.875838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.560 [2024-10-25 18:00:44.875858] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:26.560 [2024-10-25 18:00:44.876520] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:26.560 [2024-10-25 18:00:44.876538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.560 [2024-10-25 18:00:44.876545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:26.560 [2024-10-25 18:00:44.876570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.684 ms 00:19:26.560 [2024-10-25 18:00:44.876578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.560 [2024-10-25 18:00:44.877647] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:26.560 [2024-10-25 18:00:44.890353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.560 [2024-10-25 18:00:44.890402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:26.560 [2024-10-25 18:00:44.890422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.707 ms 00:19:26.560 [2024-10-25 18:00:44.890433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.560 [2024-10-25 18:00:44.890495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.560 [2024-10-25 18:00:44.890506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:26.560 [2024-10-25 18:00:44.890514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:26.560 [2024-10-25 18:00:44.890520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.560 [2024-10-25 18:00:44.895368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
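
The second startup now walks through the same initialization sequence, and its layout dump repeats the geometry shown during the first startup. Those region sizes are internally consistent; for example (4 KiB FTL block size inferred from the dd invocation and the MiB figures in the dumps):

echo $((20971520 * 4 / 1024 / 1024))      # 80     -> "Region l2p ... blocks: 80.00 MiB"
echo $((0x1900000 * 4096 / 1024 / 1024))  # 102400 -> data_btm region, 102400.00 MiB
echo $((261120 * 4096 / 1024 / 1024))     # 1020 MiB of data per 261120-block band
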
00:19:26.560 [2024-10-25 18:00:44.895397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:26.560 [2024-10-25 18:00:44.895409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.782 ms 00:19:26.560 [2024-10-25 18:00:44.895417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.560 [2024-10-25 18:00:44.895493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.560 [2024-10-25 18:00:44.895502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:26.560 [2024-10-25 18:00:44.895510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:26.560 [2024-10-25 18:00:44.895517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.560 [2024-10-25 18:00:44.895573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.560 [2024-10-25 18:00:44.895583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:26.560 [2024-10-25 18:00:44.895591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:26.560 [2024-10-25 18:00:44.895598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.560 [2024-10-25 18:00:44.895620] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:26.561 [2024-10-25 18:00:44.898935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.561 [2024-10-25 18:00:44.898961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:26.561 [2024-10-25 18:00:44.898971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.321 ms 00:19:26.561 [2024-10-25 18:00:44.898980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.561 [2024-10-25 18:00:44.899007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.561 [2024-10-25 18:00:44.899015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:26.561 [2024-10-25 18:00:44.899022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:26.561 [2024-10-25 18:00:44.899029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.561 [2024-10-25 18:00:44.899048] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:26.561 [2024-10-25 18:00:44.899067] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:26.561 [2024-10-25 18:00:44.899100] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:26.561 [2024-10-25 18:00:44.899117] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:26.561 [2024-10-25 18:00:44.899219] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:26.561 [2024-10-25 18:00:44.899229] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:26.561 [2024-10-25 18:00:44.899239] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:26.561 [2024-10-25 18:00:44.899249] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:26.561 [2024-10-25 18:00:44.899257] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:26.561 [2024-10-25 18:00:44.899265] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:26.561 [2024-10-25 18:00:44.899272] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:26.561 [2024-10-25 18:00:44.899280] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:26.561 [2024-10-25 18:00:44.899286] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:26.561 [2024-10-25 18:00:44.899296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.561 [2024-10-25 18:00:44.899303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:26.561 [2024-10-25 18:00:44.899311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:19:26.561 [2024-10-25 18:00:44.899317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.561 [2024-10-25 18:00:44.899400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.561 [2024-10-25 18:00:44.899407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:26.561 [2024-10-25 18:00:44.899414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:26.561 [2024-10-25 18:00:44.899421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.561 [2024-10-25 18:00:44.899521] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:26.561 [2024-10-25 18:00:44.899533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:26.561 [2024-10-25 18:00:44.899541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:26.561 [2024-10-25 18:00:44.899548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:26.561 [2024-10-25 18:00:44.899695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:26.561 [2024-10-25 18:00:44.899728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:26.561 [2024-10-25 18:00:44.899748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:26.561 [2024-10-25 18:00:44.899766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:26.561 [2024-10-25 18:00:44.899785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:26.561 [2024-10-25 18:00:44.899803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:26.561 [2024-10-25 18:00:44.899820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:26.561 [2024-10-25 18:00:44.899877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:26.561 [2024-10-25 18:00:44.899900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:26.561 [2024-10-25 18:00:44.899917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:26.561 [2024-10-25 18:00:44.899936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:26.561 [2024-10-25 18:00:44.899961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:26.561 [2024-10-25 18:00:44.899979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:26.561 [2024-10-25 18:00:44.900039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:26.561 [2024-10-25 18:00:44.900061] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:26.561 [2024-10-25 18:00:44.900080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:26.561 [2024-10-25 18:00:44.900099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:26.561 [2024-10-25 18:00:44.900116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:26.561 [2024-10-25 18:00:44.900136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:26.561 [2024-10-25 18:00:44.900185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:26.561 [2024-10-25 18:00:44.900206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:26.561 [2024-10-25 18:00:44.900224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:26.561 [2024-10-25 18:00:44.900243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:26.561 [2024-10-25 18:00:44.900260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:26.561 [2024-10-25 18:00:44.900307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:26.561 [2024-10-25 18:00:44.900328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:26.561 [2024-10-25 18:00:44.900346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:26.561 [2024-10-25 18:00:44.900363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:26.561 [2024-10-25 18:00:44.900382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:26.561 [2024-10-25 18:00:44.900429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:26.561 [2024-10-25 18:00:44.900451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:26.561 [2024-10-25 18:00:44.900593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:26.561 [2024-10-25 18:00:44.900602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:26.561 [2024-10-25 18:00:44.900609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:26.561 [2024-10-25 18:00:44.900616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:26.561 [2024-10-25 18:00:44.900622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:26.561 [2024-10-25 18:00:44.900629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:26.561 [2024-10-25 18:00:44.900636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:26.561 [2024-10-25 18:00:44.900642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:26.561 [2024-10-25 18:00:44.900649] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:26.561 [2024-10-25 18:00:44.900656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:26.561 [2024-10-25 18:00:44.900663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:26.561 [2024-10-25 18:00:44.900671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:26.561 [2024-10-25 18:00:44.900678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:26.561 [2024-10-25 18:00:44.900685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:26.561 [2024-10-25 18:00:44.900691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:26.561 
[2024-10-25 18:00:44.900698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:26.561 [2024-10-25 18:00:44.900705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:26.561 [2024-10-25 18:00:44.900711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:26.561 [2024-10-25 18:00:44.900720] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:26.561 [2024-10-25 18:00:44.900730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:26.561 [2024-10-25 18:00:44.900737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:26.561 [2024-10-25 18:00:44.900745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:26.561 [2024-10-25 18:00:44.900752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:26.561 [2024-10-25 18:00:44.900759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:26.561 [2024-10-25 18:00:44.900766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:26.561 [2024-10-25 18:00:44.900773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:26.561 [2024-10-25 18:00:44.900780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:26.562 [2024-10-25 18:00:44.900787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:26.562 [2024-10-25 18:00:44.900793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:26.562 [2024-10-25 18:00:44.900800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:26.562 [2024-10-25 18:00:44.900807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:26.562 [2024-10-25 18:00:44.900816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:26.562 [2024-10-25 18:00:44.900823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:26.562 [2024-10-25 18:00:44.900830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:26.562 [2024-10-25 18:00:44.900837] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:26.562 [2024-10-25 18:00:44.900845] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:26.562 [2024-10-25 18:00:44.900856] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:26.562 [2024-10-25 18:00:44.900863] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:26.562 [2024-10-25 18:00:44.900871] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:26.562 [2024-10-25 18:00:44.900878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:26.562 [2024-10-25 18:00:44.900885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.562 [2024-10-25 18:00:44.900892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:26.562 [2024-10-25 18:00:44.900900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.432 ms 00:19:26.562 [2024-10-25 18:00:44.900907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.562 [2024-10-25 18:00:44.926571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.562 [2024-10-25 18:00:44.926700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:26.562 [2024-10-25 18:00:44.926716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.605 ms 00:19:26.562 [2024-10-25 18:00:44.926723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.562 [2024-10-25 18:00:44.926808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.562 [2024-10-25 18:00:44.926820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:26.562 [2024-10-25 18:00:44.926828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:19:26.562 [2024-10-25 18:00:44.926835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.562 [2024-10-25 18:00:44.970461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.562 [2024-10-25 18:00:44.970500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:26.562 [2024-10-25 18:00:44.970512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.574 ms 00:19:26.562 [2024-10-25 18:00:44.970520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.562 [2024-10-25 18:00:44.970572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.562 [2024-10-25 18:00:44.970582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:26.562 [2024-10-25 18:00:44.970591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:26.562 [2024-10-25 18:00:44.970601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.562 [2024-10-25 18:00:44.970974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.562 [2024-10-25 18:00:44.970990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:26.562 [2024-10-25 18:00:44.970999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:19:26.562 [2024-10-25 18:00:44.971007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.562 [2024-10-25 18:00:44.971136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.562 [2024-10-25 18:00:44.971152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:26.562 [2024-10-25 18:00:44.971160] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:19:26.562 [2024-10-25 18:00:44.971168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.562 [2024-10-25 18:00:44.984147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.562 [2024-10-25 18:00:44.984176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:26.562 [2024-10-25 18:00:44.984186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.959 ms 00:19:26.562 [2024-10-25 18:00:44.984196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.823 [2024-10-25 18:00:44.997273] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:26.823 [2024-10-25 18:00:44.997309] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:26.823 [2024-10-25 18:00:44.997320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.823 [2024-10-25 18:00:44.997328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:26.823 [2024-10-25 18:00:44.997337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.033 ms 00:19:26.823 [2024-10-25 18:00:44.997344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.823 [2024-10-25 18:00:45.021480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.823 [2024-10-25 18:00:45.021523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:26.823 [2024-10-25 18:00:45.021534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.098 ms 00:19:26.823 [2024-10-25 18:00:45.021541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.823 [2024-10-25 18:00:45.033269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.823 [2024-10-25 18:00:45.033301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:26.823 [2024-10-25 18:00:45.033311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.664 ms 00:19:26.823 [2024-10-25 18:00:45.033318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.824 [2024-10-25 18:00:45.044724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.824 [2024-10-25 18:00:45.044754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:26.824 [2024-10-25 18:00:45.044764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.372 ms 00:19:26.824 [2024-10-25 18:00:45.044771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.824 [2024-10-25 18:00:45.045369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.824 [2024-10-25 18:00:45.045387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:26.824 [2024-10-25 18:00:45.045396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:19:26.824 [2024-10-25 18:00:45.045404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.824 [2024-10-25 18:00:45.101786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.824 [2024-10-25 18:00:45.101842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:26.824 [2024-10-25 18:00:45.101856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 56.361 ms 00:19:26.824 [2024-10-25 18:00:45.101869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.824 [2024-10-25 18:00:45.112461] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:26.824 [2024-10-25 18:00:45.114961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.824 [2024-10-25 18:00:45.115106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:26.824 [2024-10-25 18:00:45.115124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.043 ms 00:19:26.824 [2024-10-25 18:00:45.115133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.824 [2024-10-25 18:00:45.115239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.824 [2024-10-25 18:00:45.115250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:26.824 [2024-10-25 18:00:45.115260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:26.824 [2024-10-25 18:00:45.115269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.824 [2024-10-25 18:00:45.115340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.824 [2024-10-25 18:00:45.115351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:26.824 [2024-10-25 18:00:45.115360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:26.824 [2024-10-25 18:00:45.115369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.824 [2024-10-25 18:00:45.115388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.824 [2024-10-25 18:00:45.115397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:26.824 [2024-10-25 18:00:45.115405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:26.824 [2024-10-25 18:00:45.115414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.824 [2024-10-25 18:00:45.115445] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:26.824 [2024-10-25 18:00:45.115457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.824 [2024-10-25 18:00:45.115466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:26.824 [2024-10-25 18:00:45.115475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:26.824 [2024-10-25 18:00:45.115483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.824 [2024-10-25 18:00:45.139481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.824 [2024-10-25 18:00:45.139517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:26.824 [2024-10-25 18:00:45.139528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.980 ms 00:19:26.824 [2024-10-25 18:00:45.139535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.824 [2024-10-25 18:00:45.139627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.824 [2024-10-25 18:00:45.139638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:26.824 [2024-10-25 18:00:45.139647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:26.824 [2024-10-25 18:00:45.139654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
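Everything from "Check configuration" down to "Finalize initialization" above is the FTL management pipeline bringing ftl0 back up: each step is traced as an Action entry followed by its name, duration, and status (the trace_step lines from mngt/ftl_mngt.c). When a startup looks slow, pairing the name/duration lines gives a quick per-step profile; a rough sketch, where build.log stands in for wherever this console output was saved:

    # pair each "name:" trace with the "duration:" trace that follows it,
    # then list the slowest management steps first
    awk '/trace_step: \*NOTICE\*.*name: /     { sub(/.*name: /, "");     step = $0 }
         /trace_step: \*NOTICE\*.*duration: / { sub(/.*duration: /, ""); print $1, "ms -", step }' \
        build.log | sort -rn | head

On this run it would surface "Restore P2L checkpoints" (56.361 ms) and "Initialize NV cache" (43.574 ms) as the dominant steps behind the "FTL startup" total reported in the finish_msg line that follows.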
00:19:26.824 [2024-10-25 18:00:45.140848] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 264.734 ms, result 0 00:19:28.214  [2024-10-25T18:00:47.592Z] Copying: 15/1024 [MB] (15 MBps) [2024-10-25T18:00:48.535Z] Copying: 38/1024 [MB] (22 MBps) [2024-10-25T18:00:49.480Z] Copying: 60/1024 [MB] (21 MBps) [2024-10-25T18:00:50.424Z] Copying: 93/1024 [MB] (32 MBps) [2024-10-25T18:00:51.477Z] Copying: 131/1024 [MB] (38 MBps) [2024-10-25T18:00:52.434Z] Copying: 153/1024 [MB] (22 MBps) [2024-10-25T18:00:53.378Z] Copying: 175/1024 [MB] (22 MBps) [2024-10-25T18:00:54.323Z] Copying: 199/1024 [MB] (23 MBps) [2024-10-25T18:00:55.710Z] Copying: 240/1024 [MB] (40 MBps) [2024-10-25T18:00:56.652Z] Copying: 273/1024 [MB] (33 MBps) [2024-10-25T18:00:57.597Z] Copying: 309/1024 [MB] (36 MBps) [2024-10-25T18:00:58.542Z] Copying: 334/1024 [MB] (25 MBps) [2024-10-25T18:00:59.488Z] Copying: 360/1024 [MB] (25 MBps) [2024-10-25T18:01:00.477Z] Copying: 381/1024 [MB] (20 MBps) [2024-10-25T18:01:01.439Z] Copying: 406/1024 [MB] (24 MBps) [2024-10-25T18:01:02.377Z] Copying: 424/1024 [MB] (18 MBps) [2024-10-25T18:01:03.317Z] Copying: 461/1024 [MB] (36 MBps) [2024-10-25T18:01:04.707Z] Copying: 488/1024 [MB] (26 MBps) [2024-10-25T18:01:05.648Z] Copying: 507/1024 [MB] (19 MBps) [2024-10-25T18:01:06.590Z] Copying: 540/1024 [MB] (32 MBps) [2024-10-25T18:01:07.532Z] Copying: 568/1024 [MB] (27 MBps) [2024-10-25T18:01:08.477Z] Copying: 590/1024 [MB] (22 MBps) [2024-10-25T18:01:09.464Z] Copying: 614/1024 [MB] (24 MBps) [2024-10-25T18:01:10.410Z] Copying: 640/1024 [MB] (25 MBps) [2024-10-25T18:01:11.353Z] Copying: 664/1024 [MB] (23 MBps) [2024-10-25T18:01:12.741Z] Copying: 687/1024 [MB] (23 MBps) [2024-10-25T18:01:13.686Z] Copying: 707/1024 [MB] (19 MBps) [2024-10-25T18:01:14.628Z] Copying: 728/1024 [MB] (21 MBps) [2024-10-25T18:01:15.570Z] Copying: 753/1024 [MB] (24 MBps) [2024-10-25T18:01:16.514Z] Copying: 774/1024 [MB] (21 MBps) [2024-10-25T18:01:17.457Z] Copying: 797/1024 [MB] (22 MBps) [2024-10-25T18:01:18.475Z] Copying: 824/1024 [MB] (26 MBps) [2024-10-25T18:01:19.440Z] Copying: 841/1024 [MB] (17 MBps) [2024-10-25T18:01:20.381Z] Copying: 854/1024 [MB] (12 MBps) [2024-10-25T18:01:21.327Z] Copying: 872/1024 [MB] (18 MBps) [2024-10-25T18:01:22.714Z] Copying: 885/1024 [MB] (13 MBps) [2024-10-25T18:01:23.656Z] Copying: 898/1024 [MB] (13 MBps) [2024-10-25T18:01:24.606Z] Copying: 909/1024 [MB] (10 MBps) [2024-10-25T18:01:25.549Z] Copying: 920/1024 [MB] (11 MBps) [2024-10-25T18:01:26.495Z] Copying: 952800/1048576 [kB] (10040 kBps) [2024-10-25T18:01:27.438Z] Copying: 962880/1048576 [kB] (10080 kBps) [2024-10-25T18:01:28.383Z] Copying: 950/1024 [MB] (10 MBps) [2024-10-25T18:01:29.326Z] Copying: 961/1024 [MB] (10 MBps) [2024-10-25T18:01:30.327Z] Copying: 972/1024 [MB] (10 MBps) [2024-10-25T18:01:31.714Z] Copying: 982/1024 [MB] (10 MBps) [2024-10-25T18:01:32.655Z] Copying: 995/1024 [MB] (12 MBps) [2024-10-25T18:01:33.599Z] Copying: 1008/1024 [MB] (13 MBps) [2024-10-25T18:01:34.171Z] Copying: 1042848/1048576 [kB] (10152 kBps) [2024-10-25T18:01:34.171Z] Copying: 1024/1024 [MB] (average 21 MBps)[2024-10-25 18:01:33.908764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.736 [2024-10-25 18:01:33.908846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:15.736 [2024-10-25 18:01:33.908863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:15.736 [2024-10-25 18:01:33.908872] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.736 [2024-10-25 18:01:33.908895] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:15.736 [2024-10-25 18:01:33.913701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.736 [2024-10-25 18:01:33.913767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:15.736 [2024-10-25 18:01:33.913786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.785 ms 00:20:15.736 [2024-10-25 18:01:33.913821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.736 [2024-10-25 18:01:33.914314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.736 [2024-10-25 18:01:33.914343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:15.736 [2024-10-25 18:01:33.914361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:20:15.736 [2024-10-25 18:01:33.914376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.736 [2024-10-25 18:01:33.922747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.736 [2024-10-25 18:01:33.922783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:15.736 [2024-10-25 18:01:33.922794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.344 ms 00:20:15.736 [2024-10-25 18:01:33.922802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.736 [2024-10-25 18:01:33.929176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.736 [2024-10-25 18:01:33.929328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:15.736 [2024-10-25 18:01:33.929393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.341 ms 00:20:15.736 [2024-10-25 18:01:33.929419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.736 [2024-10-25 18:01:33.956962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.736 [2024-10-25 18:01:33.957222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:15.736 [2024-10-25 18:01:33.957295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.433 ms 00:20:15.736 [2024-10-25 18:01:33.957320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.736 [2024-10-25 18:01:33.974509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.736 [2024-10-25 18:01:33.974756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:15.736 [2024-10-25 18:01:33.974832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.083 ms 00:20:15.736 [2024-10-25 18:01:33.974845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.736 [2024-10-25 18:01:33.975057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.736 [2024-10-25 18:01:33.975075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:15.736 [2024-10-25 18:01:33.975095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:20:15.736 [2024-10-25 18:01:33.975104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.736 [2024-10-25 18:01:34.002008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.737 [2024-10-25 18:01:34.002283] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:15.737 [2024-10-25 18:01:34.002307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.885 ms 00:20:15.737 [2024-10-25 18:01:34.002317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.737 [2024-10-25 18:01:34.028647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.737 [2024-10-25 18:01:34.028910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:15.737 [2024-10-25 18:01:34.028934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.164 ms 00:20:15.737 [2024-10-25 18:01:34.028942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.737 [2024-10-25 18:01:34.054367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.737 [2024-10-25 18:01:34.054411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:15.737 [2024-10-25 18:01:34.054425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.380 ms 00:20:15.737 [2024-10-25 18:01:34.054433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.737 [2024-10-25 18:01:34.079917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.737 [2024-10-25 18:01:34.080151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:15.737 [2024-10-25 18:01:34.080174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.384 ms 00:20:15.737 [2024-10-25 18:01:34.080183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.737 [2024-10-25 18:01:34.080230] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:15.737 [2024-10-25 18:01:34.080248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080605] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:15.737 [2024-10-25 18:01:34.080801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 
18:01:34.080808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.080816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.080824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.080835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.080843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.080851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.080858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.080866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.080874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.080881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.080889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.080896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.080904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.080912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.080919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.080927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.080938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.080946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.080954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.080976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.080984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.080992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.080999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.081006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.081013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:20:15.738 [2024-10-25 18:01:34.081020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.081028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.081035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.081043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.081051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.081058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.081067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.081074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.081082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.081089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.081100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.081109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.081117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:15.738 [2024-10-25 18:01:34.081135] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:15.738 [2024-10-25 18:01:34.081144] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f2aaecaf-110f-466b-a18e-9eabbbdbe30d 00:20:15.738 [2024-10-25 18:01:34.081156] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:15.738 [2024-10-25 18:01:34.081164] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:15.738 [2024-10-25 18:01:34.081172] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:15.738 [2024-10-25 18:01:34.081181] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:15.738 [2024-10-25 18:01:34.081188] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:15.738 [2024-10-25 18:01:34.081196] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:15.738 [2024-10-25 18:01:34.081212] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:15.738 [2024-10-25 18:01:34.081219] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:15.738 [2024-10-25 18:01:34.081226] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:15.738 [2024-10-25 18:01:34.081234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.738 [2024-10-25 18:01:34.081242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:15.738 [2024-10-25 18:01:34.081251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.005 ms 00:20:15.738 [2024-10-25 18:01:34.081258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:15.738 [2024-10-25 18:01:34.095371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.738 [2024-10-25 18:01:34.095626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:15.738 [2024-10-25 18:01:34.095646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.074 ms 00:20:15.738 [2024-10-25 18:01:34.095656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.738 [2024-10-25 18:01:34.096053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.738 [2024-10-25 18:01:34.096062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:15.738 [2024-10-25 18:01:34.096072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:20:15.738 [2024-10-25 18:01:34.096088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.738 [2024-10-25 18:01:34.132453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.738 [2024-10-25 18:01:34.132503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:15.738 [2024-10-25 18:01:34.132516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.738 [2024-10-25 18:01:34.132525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.738 [2024-10-25 18:01:34.132628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.738 [2024-10-25 18:01:34.132638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:15.738 [2024-10-25 18:01:34.132653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.738 [2024-10-25 18:01:34.132661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.738 [2024-10-25 18:01:34.132775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.738 [2024-10-25 18:01:34.132786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:15.738 [2024-10-25 18:01:34.132795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.738 [2024-10-25 18:01:34.132803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.738 [2024-10-25 18:01:34.132820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.738 [2024-10-25 18:01:34.132829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:15.738 [2024-10-25 18:01:34.132838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.738 [2024-10-25 18:01:34.132846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.000 [2024-10-25 18:01:34.220380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.000 [2024-10-25 18:01:34.220449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:16.000 [2024-10-25 18:01:34.220465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.000 [2024-10-25 18:01:34.220474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.000 [2024-10-25 18:01:34.290434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.000 [2024-10-25 18:01:34.290510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:16.000 [2024-10-25 18:01:34.290525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.000 
[2024-10-25 18:01:34.290535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.000 [2024-10-25 18:01:34.290633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.000 [2024-10-25 18:01:34.290647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:16.000 [2024-10-25 18:01:34.290656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.000 [2024-10-25 18:01:34.290665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.000 [2024-10-25 18:01:34.290725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.000 [2024-10-25 18:01:34.290750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:16.000 [2024-10-25 18:01:34.290761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.000 [2024-10-25 18:01:34.290771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.000 [2024-10-25 18:01:34.290871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.000 [2024-10-25 18:01:34.290889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:16.000 [2024-10-25 18:01:34.290899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.000 [2024-10-25 18:01:34.290908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.000 [2024-10-25 18:01:34.290942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.000 [2024-10-25 18:01:34.290954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:16.000 [2024-10-25 18:01:34.290964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.000 [2024-10-25 18:01:34.290972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.000 [2024-10-25 18:01:34.291015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.000 [2024-10-25 18:01:34.291031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:16.000 [2024-10-25 18:01:34.291039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.000 [2024-10-25 18:01:34.291048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.000 [2024-10-25 18:01:34.291095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.000 [2024-10-25 18:01:34.291109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:16.000 [2024-10-25 18:01:34.291118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.000 [2024-10-25 18:01:34.291128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.000 [2024-10-25 18:01:34.291264] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 382.481 ms, result 0 00:20:16.942 00:20:16.942 00:20:16.942 18:01:35 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:18.856 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:18.856 18:01:37 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:19.116 [2024-10-25 18:01:37.294696] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 
initialization... 00:20:19.116 [2024-10-25 18:01:37.294957] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75287 ] 00:20:19.116 [2024-10-25 18:01:37.456071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.386 [2024-10-25 18:01:37.553180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.386 [2024-10-25 18:01:37.802482] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:19.386 [2024-10-25 18:01:37.802539] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:19.660 [2024-10-25 18:01:37.955885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.660 [2024-10-25 18:01:37.955932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:19.660 [2024-10-25 18:01:37.955948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:19.660 [2024-10-25 18:01:37.955957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.660 [2024-10-25 18:01:37.955999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.660 [2024-10-25 18:01:37.956009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:19.660 [2024-10-25 18:01:37.956019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:19.660 [2024-10-25 18:01:37.956027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.660 [2024-10-25 18:01:37.956046] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:19.660 [2024-10-25 18:01:37.956708] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:19.660 [2024-10-25 18:01:37.956731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.660 [2024-10-25 18:01:37.956740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:19.660 [2024-10-25 18:01:37.956748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms 00:20:19.660 [2024-10-25 18:01:37.956755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.660 [2024-10-25 18:01:37.957735] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:19.660 [2024-10-25 18:01:37.969867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.660 [2024-10-25 18:01:37.969897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:19.660 [2024-10-25 18:01:37.969908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.133 ms 00:20:19.660 [2024-10-25 18:01:37.969916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.660 [2024-10-25 18:01:37.969966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.660 [2024-10-25 18:01:37.969978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:19.660 [2024-10-25 18:01:37.969987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:19.660 [2024-10-25 18:01:37.969994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.660 [2024-10-25 18:01:37.974474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:19.660 [2024-10-25 18:01:37.974503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:19.660 [2024-10-25 18:01:37.974512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.424 ms 00:20:19.660 [2024-10-25 18:01:37.974519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.660 [2024-10-25 18:01:37.974607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.660 [2024-10-25 18:01:37.974618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:19.660 [2024-10-25 18:01:37.974626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:20:19.660 [2024-10-25 18:01:37.974633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.660 [2024-10-25 18:01:37.974669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.660 [2024-10-25 18:01:37.974679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:19.660 [2024-10-25 18:01:37.974686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:19.660 [2024-10-25 18:01:37.974693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.660 [2024-10-25 18:01:37.974713] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:19.660 [2024-10-25 18:01:37.978167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.660 [2024-10-25 18:01:37.978193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:19.660 [2024-10-25 18:01:37.978203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.459 ms 00:20:19.660 [2024-10-25 18:01:37.978213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.660 [2024-10-25 18:01:37.978239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.660 [2024-10-25 18:01:37.978247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:19.660 [2024-10-25 18:01:37.978259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:19.660 [2024-10-25 18:01:37.978266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.660 [2024-10-25 18:01:37.978285] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:19.660 [2024-10-25 18:01:37.978303] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:19.660 [2024-10-25 18:01:37.978336] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:19.660 [2024-10-25 18:01:37.978353] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:19.660 [2024-10-25 18:01:37.978455] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:19.660 [2024-10-25 18:01:37.978464] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:19.660 [2024-10-25 18:01:37.978475] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:19.660 [2024-10-25 18:01:37.978488] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:19.660 [2024-10-25 18:01:37.978497] 
ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:19.660 [2024-10-25 18:01:37.978505] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:19.660 [2024-10-25 18:01:37.978512] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:19.660 [2024-10-25 18:01:37.978519] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:19.660 [2024-10-25 18:01:37.978526] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:19.660 [2024-10-25 18:01:37.978535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.660 [2024-10-25 18:01:37.978542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:19.660 [2024-10-25 18:01:37.978550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:20:19.660 [2024-10-25 18:01:37.978574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.660 [2024-10-25 18:01:37.978656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.660 [2024-10-25 18:01:37.978664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:19.660 [2024-10-25 18:01:37.978672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:19.660 [2024-10-25 18:01:37.978679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.660 [2024-10-25 18:01:37.978790] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:19.660 [2024-10-25 18:01:37.978803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:19.660 [2024-10-25 18:01:37.978811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:19.660 [2024-10-25 18:01:37.978819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.661 [2024-10-25 18:01:37.978826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:19.661 [2024-10-25 18:01:37.978833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:19.661 [2024-10-25 18:01:37.978839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:19.661 [2024-10-25 18:01:37.978846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:19.661 [2024-10-25 18:01:37.978853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:19.661 [2024-10-25 18:01:37.978859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:19.661 [2024-10-25 18:01:37.978866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:19.661 [2024-10-25 18:01:37.978873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:19.661 [2024-10-25 18:01:37.978879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:19.661 [2024-10-25 18:01:37.978886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:19.661 [2024-10-25 18:01:37.978892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:19.661 [2024-10-25 18:01:37.978905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.661 [2024-10-25 18:01:37.978911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:19.661 [2024-10-25 18:01:37.978920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:19.661 [2024-10-25 18:01:37.978926] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.661 [2024-10-25 18:01:37.978933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:19.661 [2024-10-25 18:01:37.978939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:19.661 [2024-10-25 18:01:37.978945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.661 [2024-10-25 18:01:37.978952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:19.661 [2024-10-25 18:01:37.978958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:19.661 [2024-10-25 18:01:37.978965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.661 [2024-10-25 18:01:37.978971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:19.661 [2024-10-25 18:01:37.978977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:19.661 [2024-10-25 18:01:37.978984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.661 [2024-10-25 18:01:37.978990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:19.661 [2024-10-25 18:01:37.978996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:19.661 [2024-10-25 18:01:37.979003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.661 [2024-10-25 18:01:37.979009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:19.661 [2024-10-25 18:01:37.979015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:19.661 [2024-10-25 18:01:37.979021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:19.661 [2024-10-25 18:01:37.979028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:19.661 [2024-10-25 18:01:37.979034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:19.661 [2024-10-25 18:01:37.979040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:19.661 [2024-10-25 18:01:37.979047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:19.661 [2024-10-25 18:01:37.979053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:19.661 [2024-10-25 18:01:37.979060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.661 [2024-10-25 18:01:37.979067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:19.661 [2024-10-25 18:01:37.979073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:19.661 [2024-10-25 18:01:37.979079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.661 [2024-10-25 18:01:37.979085] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:19.661 [2024-10-25 18:01:37.979092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:19.661 [2024-10-25 18:01:37.979099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:19.661 [2024-10-25 18:01:37.979107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.661 [2024-10-25 18:01:37.979114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:19.661 [2024-10-25 18:01:37.979123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:19.661 [2024-10-25 18:01:37.979130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:19.661 
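The dump_region output above is self-consistent: the l2p region's 80.00 MiB is exactly "L2P entries" times "L2P address size", and consecutive NV cache regions tile the device, each offset plus size landing on the next region's offset. A minimal sketch of that cross-check, using only figures copied from the log (regions listed in offset order, and only where the two-decimal rounding of the printed offsets stays exact):

```python
# Cross-check of the FTL layout figures printed above (values copied from the log).
L2P_ENTRIES = 20971520            # "L2P entries" from ftl_layout_setup
L2P_ADDR_SIZE = 4                 # "L2P address size", bytes per entry
MiB = 1024 * 1024

# 20971520 entries * 4 B = 80.00 MiB -- matches "Region l2p ... blocks: 80.00 MiB".
assert L2P_ENTRIES * L2P_ADDR_SIZE / MiB == 80.0

# Consecutive NV cache regions tile the device: offset + blocks = next offset.
regions = [                       # (name, offset MiB, blocks MiB) from dump_region
    ("sb",              0.00,  0.12),
    ("l2p",             0.12, 80.00),
    ("band_md",        80.12,  0.50),
    ("band_md_mirror", 80.62,  0.50),
    ("p2l0",           81.12,  8.00),
    ("p2l1",           89.12,  8.00),
    ("p2l2",           97.12,  8.00),
    ("p2l3",          105.12,  8.00),
    ("trim_md",       113.12,  0.25),
]
for (name, off, size), (nxt, noff, _) in zip(regions, regions[1:]):
    assert round(off + size, 2) == noff, (name, nxt)
```

The same tiling holds exactly in the hex SB metadata dump further down, where blk_offs and blk_sz are in 4 KiB blocks (0x0 + 0x20 = 0x20, 0x20 + 0x5000 = 0x5020, and so on, with the 0x5000-block l2p region again working out to 80 MiB).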
[2024-10-25 18:01:37.979137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:19.661 [2024-10-25 18:01:37.979144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:19.661 [2024-10-25 18:01:37.979150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:19.661 [2024-10-25 18:01:37.979158] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:19.661 [2024-10-25 18:01:37.979167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:19.661 [2024-10-25 18:01:37.979175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:19.661 [2024-10-25 18:01:37.979182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:19.661 [2024-10-25 18:01:37.979189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:19.661 [2024-10-25 18:01:37.979196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:19.661 [2024-10-25 18:01:37.979203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:19.661 [2024-10-25 18:01:37.979211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:19.661 [2024-10-25 18:01:37.979218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:19.661 [2024-10-25 18:01:37.979225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:19.661 [2024-10-25 18:01:37.979232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:19.661 [2024-10-25 18:01:37.979239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:19.661 [2024-10-25 18:01:37.979246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:19.661 [2024-10-25 18:01:37.979253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:19.661 [2024-10-25 18:01:37.979260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:19.661 [2024-10-25 18:01:37.979267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:19.661 [2024-10-25 18:01:37.979274] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:19.661 [2024-10-25 18:01:37.979282] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:19.661 [2024-10-25 18:01:37.979291] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:19.661 [2024-10-25 18:01:37.979298] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:19.661 [2024-10-25 18:01:37.979305] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:19.661 [2024-10-25 18:01:37.979313] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:19.661 [2024-10-25 18:01:37.979319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.661 [2024-10-25 18:01:37.979326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:19.661 [2024-10-25 18:01:37.979333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:20:19.661 [2024-10-25 18:01:37.979340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.661 [2024-10-25 18:01:38.004447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.661 [2024-10-25 18:01:38.004479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:19.661 [2024-10-25 18:01:38.004490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.066 ms 00:20:19.661 [2024-10-25 18:01:38.004497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.661 [2024-10-25 18:01:38.004596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.661 [2024-10-25 18:01:38.004617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:19.661 [2024-10-25 18:01:38.004625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:20:19.661 [2024-10-25 18:01:38.004632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.661 [2024-10-25 18:01:38.044516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.661 [2024-10-25 18:01:38.044564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:19.661 [2024-10-25 18:01:38.044576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.836 ms 00:20:19.661 [2024-10-25 18:01:38.044585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.661 [2024-10-25 18:01:38.044634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.661 [2024-10-25 18:01:38.044643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:19.661 [2024-10-25 18:01:38.044652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:19.661 [2024-10-25 18:01:38.044662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.661 [2024-10-25 18:01:38.044992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.661 [2024-10-25 18:01:38.045014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:19.661 [2024-10-25 18:01:38.045022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:20:19.661 [2024-10-25 18:01:38.045030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.661 [2024-10-25 18:01:38.045147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.661 [2024-10-25 18:01:38.045163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:19.661 [2024-10-25 18:01:38.045171] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:20:19.661 [2024-10-25 18:01:38.045178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.661 [2024-10-25 18:01:38.060640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.661 [2024-10-25 18:01:38.060671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:19.661 [2024-10-25 18:01:38.060682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.438 ms 00:20:19.661 [2024-10-25 18:01:38.060690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.661 [2024-10-25 18:01:38.073064] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:19.661 [2024-10-25 18:01:38.073098] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:19.661 [2024-10-25 18:01:38.073111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.662 [2024-10-25 18:01:38.073119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:19.662 [2024-10-25 18:01:38.073128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.311 ms 00:20:19.662 [2024-10-25 18:01:38.073136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.920 [2024-10-25 18:01:38.097403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.920 [2024-10-25 18:01:38.097446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:19.920 [2024-10-25 18:01:38.097457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.229 ms 00:20:19.920 [2024-10-25 18:01:38.097465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.920 [2024-10-25 18:01:38.108905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.920 [2024-10-25 18:01:38.108938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:19.920 [2024-10-25 18:01:38.108948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.399 ms 00:20:19.920 [2024-10-25 18:01:38.108956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.920 [2024-10-25 18:01:38.120406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.920 [2024-10-25 18:01:38.120536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:19.920 [2024-10-25 18:01:38.120552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.418 ms 00:20:19.920 [2024-10-25 18:01:38.120574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.920 [2024-10-25 18:01:38.121173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.920 [2024-10-25 18:01:38.121191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:19.920 [2024-10-25 18:01:38.121200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:20:19.920 [2024-10-25 18:01:38.121207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.920 [2024-10-25 18:01:38.175330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.920 [2024-10-25 18:01:38.175505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:19.920 [2024-10-25 18:01:38.175524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.104 ms 00:20:19.920 [2024-10-25 18:01:38.175537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.920 [2024-10-25 18:01:38.185889] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:19.920 [2024-10-25 18:01:38.188146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.920 [2024-10-25 18:01:38.188174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:19.920 [2024-10-25 18:01:38.188185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.554 ms 00:20:19.920 [2024-10-25 18:01:38.188194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.920 [2024-10-25 18:01:38.188279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.920 [2024-10-25 18:01:38.188290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:19.920 [2024-10-25 18:01:38.188298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:19.920 [2024-10-25 18:01:38.188306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.920 [2024-10-25 18:01:38.188377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.920 [2024-10-25 18:01:38.188387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:19.920 [2024-10-25 18:01:38.188396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:19.920 [2024-10-25 18:01:38.188403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.920 [2024-10-25 18:01:38.188422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.920 [2024-10-25 18:01:38.188430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:19.920 [2024-10-25 18:01:38.188437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:19.920 [2024-10-25 18:01:38.188444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.920 [2024-10-25 18:01:38.188472] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:19.920 [2024-10-25 18:01:38.188484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.920 [2024-10-25 18:01:38.188491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:19.920 [2024-10-25 18:01:38.188499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:19.920 [2024-10-25 18:01:38.188506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.920 [2024-10-25 18:01:38.211189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.920 [2024-10-25 18:01:38.211220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:19.920 [2024-10-25 18:01:38.211231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.666 ms 00:20:19.920 [2024-10-25 18:01:38.211239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.920 [2024-10-25 18:01:38.211308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.920 [2024-10-25 18:01:38.211317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:19.920 [2024-10-25 18:01:38.211325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:19.920 [2024-10-25 18:01:38.211333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
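Each management step above is traced as a name/duration pair, and finish_msg closes the phase with an overall figure ("'FTL startup', duration = 255.930 ms" just below). A small parser can tally the per-step durations for comparison with that total; a sketch, assuming the console output is saved one entry per line ("ftl_trace.log" is a hypothetical file name):

```python
import re
from collections import defaultdict

# Tally per-step durations from SPDK FTL trace_step output. Each step is
# printed as a "name: <step>" entry followed by a "duration: <ms> ms" entry.
durations = defaultdict(float)
pending = None
for line in open("ftl_trace.log"):            # hypothetical saved log
    m = re.search(r"\[FTL\]\[ftl0\] name: (.+?)\s*$", line)
    if m:
        pending = m.group(1)
        continue
    m = re.search(r"\[FTL\]\[ftl0\] duration: ([\d.]+) ms", line)
    if m and pending is not None:
        durations[pending] += float(m.group(1))
        pending = None

for name, ms in sorted(durations.items(), key=lambda kv: -kv[1]):
    print(f"{ms:9.3f} ms  {name}")
print(f"{sum(durations.values()):9.3f} ms  across all steps")
```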
00:20:19.920 [2024-10-25 18:01:38.212208] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 255.930 ms, result 0 00:20:20.854  [2024-10-25T18:01:40.664Z] Copying: 42/1024 [MB] (42 MBps) [2024-10-25T18:01:41.236Z] Copying: 84/1024 [MB] (41 MBps) [2024-10-25T18:01:42.620Z] Copying: 114/1024 [MB] (29 MBps) [2024-10-25T18:01:43.564Z] Copying: 148/1024 [MB] (34 MBps) [2024-10-25T18:01:44.508Z] Copying: 173/1024 [MB] (25 MBps) [2024-10-25T18:01:45.444Z] Copying: 203/1024 [MB] (29 MBps) [2024-10-25T18:01:46.377Z] Copying: 237/1024 [MB] (34 MBps) [2024-10-25T18:01:47.312Z] Copying: 277/1024 [MB] (39 MBps) [2024-10-25T18:01:48.246Z] Copying: 297/1024 [MB] (20 MBps) [2024-10-25T18:01:49.623Z] Copying: 323/1024 [MB] (26 MBps) [2024-10-25T18:01:50.563Z] Copying: 364/1024 [MB] (41 MBps) [2024-10-25T18:01:51.504Z] Copying: 399/1024 [MB] (34 MBps) [2024-10-25T18:01:52.511Z] Copying: 416/1024 [MB] (17 MBps) [2024-10-25T18:01:53.454Z] Copying: 444/1024 [MB] (28 MBps) [2024-10-25T18:01:54.396Z] Copying: 473/1024 [MB] (28 MBps) [2024-10-25T18:01:55.337Z] Copying: 495/1024 [MB] (22 MBps) [2024-10-25T18:01:56.279Z] Copying: 513/1024 [MB] (17 MBps) [2024-10-25T18:01:57.659Z] Copying: 531/1024 [MB] (17 MBps) [2024-10-25T18:01:58.227Z] Copying: 554/1024 [MB] (22 MBps) [2024-10-25T18:01:59.607Z] Copying: 582/1024 [MB] (28 MBps) [2024-10-25T18:02:00.550Z] Copying: 601/1024 [MB] (19 MBps) [2024-10-25T18:02:01.494Z] Copying: 620/1024 [MB] (18 MBps) [2024-10-25T18:02:02.436Z] Copying: 638/1024 [MB] (18 MBps) [2024-10-25T18:02:03.377Z] Copying: 660/1024 [MB] (21 MBps) [2024-10-25T18:02:04.315Z] Copying: 676/1024 [MB] (16 MBps) [2024-10-25T18:02:05.258Z] Copying: 694/1024 [MB] (17 MBps) [2024-10-25T18:02:06.647Z] Copying: 704/1024 [MB] (10 MBps) [2024-10-25T18:02:07.588Z] Copying: 715/1024 [MB] (10 MBps) [2024-10-25T18:02:08.523Z] Copying: 727/1024 [MB] (12 MBps) [2024-10-25T18:02:09.456Z] Copying: 751/1024 [MB] (24 MBps) [2024-10-25T18:02:10.390Z] Copying: 793/1024 [MB] (41 MBps) [2024-10-25T18:02:11.322Z] Copying: 834/1024 [MB] (41 MBps) [2024-10-25T18:02:12.256Z] Copying: 875/1024 [MB] (40 MBps) [2024-10-25T18:02:13.648Z] Copying: 916/1024 [MB] (41 MBps) [2024-10-25T18:02:14.284Z] Copying: 957/1024 [MB] (41 MBps) [2024-10-25T18:02:15.658Z] Copying: 1000/1024 [MB] (42 MBps) [2024-10-25T18:02:15.917Z] Copying: 1023/1024 [MB] (23 MBps) [2024-10-25T18:02:15.917Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-10-25 18:02:15.766097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.482 [2024-10-25 18:02:15.766167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:57.482 [2024-10-25 18:02:15.766184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:57.482 [2024-10-25 18:02:15.766193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.482 [2024-10-25 18:02:15.767161] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:57.482 [2024-10-25 18:02:15.771877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.482 [2024-10-25 18:02:15.771911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:57.482 [2024-10-25 18:02:15.771922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.684 ms 00:20:57.482 [2024-10-25 18:02:15.771930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.482 [2024-10-25 
18:02:15.784532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.482 [2024-10-25 18:02:15.784574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:57.482 [2024-10-25 18:02:15.784585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.465 ms 00:20:57.482 [2024-10-25 18:02:15.784594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.482 [2024-10-25 18:02:15.803608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.482 [2024-10-25 18:02:15.803769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:57.482 [2024-10-25 18:02:15.803787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.983 ms 00:20:57.482 [2024-10-25 18:02:15.803795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.482 [2024-10-25 18:02:15.809937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.482 [2024-10-25 18:02:15.809966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:57.482 [2024-10-25 18:02:15.809977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.112 ms 00:20:57.482 [2024-10-25 18:02:15.809986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.482 [2024-10-25 18:02:15.834238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.482 [2024-10-25 18:02:15.834268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:57.482 [2024-10-25 18:02:15.834279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.198 ms 00:20:57.482 [2024-10-25 18:02:15.834287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.482 [2024-10-25 18:02:15.848424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.482 [2024-10-25 18:02:15.848456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:57.482 [2024-10-25 18:02:15.848470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.107 ms 00:20:57.482 [2024-10-25 18:02:15.848479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.741 [2024-10-25 18:02:15.917842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.741 [2024-10-25 18:02:15.917974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:57.741 [2024-10-25 18:02:15.917990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.329 ms 00:20:57.741 [2024-10-25 18:02:15.917999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.741 [2024-10-25 18:02:15.941233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.741 [2024-10-25 18:02:15.941297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:57.741 [2024-10-25 18:02:15.941310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.217 ms 00:20:57.741 [2024-10-25 18:02:15.941317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.741 [2024-10-25 18:02:15.964214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.741 [2024-10-25 18:02:15.964371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:57.741 [2024-10-25 18:02:15.964386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.822 ms 00:20:57.741 [2024-10-25 18:02:15.964394] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.741 [2024-10-25 18:02:15.986300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.741 [2024-10-25 18:02:15.986329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:57.741 [2024-10-25 18:02:15.986339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.880 ms 00:20:57.741 [2024-10-25 18:02:15.986346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.741 [2024-10-25 18:02:16.008833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.742 [2024-10-25 18:02:16.008862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:57.742 [2024-10-25 18:02:16.008872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.435 ms 00:20:57.742 [2024-10-25 18:02:16.008880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.742 [2024-10-25 18:02:16.008910] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:57.742 [2024-10-25 18:02:16.008924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 117760 / 261120 wr_cnt: 1 state: open 00:20:57.742 [2024-10-25 18:02:16.008934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.008942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.008949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.008957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.008965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.008973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.008980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.008988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.008996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009055] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 
18:02:16.009245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 
00:20:57.742 [2024-10-25 18:02:16.009432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:57.742 [2024-10-25 18:02:16.009608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:57.743 [2024-10-25 18:02:16.009615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:57.743 [2024-10-25 18:02:16.009629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:57.743 [2024-10-25 18:02:16.009637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 
wr_cnt: 0 state: free 00:20:57.743 [2024-10-25 18:02:16.009645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:57.743 [2024-10-25 18:02:16.009654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:57.743 [2024-10-25 18:02:16.009673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:57.743 [2024-10-25 18:02:16.009680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:57.743 [2024-10-25 18:02:16.009688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:57.743 [2024-10-25 18:02:16.009695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:57.743 [2024-10-25 18:02:16.009703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:57.743 [2024-10-25 18:02:16.009710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:57.743 [2024-10-25 18:02:16.009727] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:57.743 [2024-10-25 18:02:16.009735] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f2aaecaf-110f-466b-a18e-9eabbbdbe30d 00:20:57.743 [2024-10-25 18:02:16.009743] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 117760 00:20:57.743 [2024-10-25 18:02:16.009750] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 118720 00:20:57.743 [2024-10-25 18:02:16.009757] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 117760 00:20:57.743 [2024-10-25 18:02:16.009766] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0082 00:20:57.743 [2024-10-25 18:02:16.009773] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:57.743 [2024-10-25 18:02:16.009781] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:57.743 [2024-10-25 18:02:16.009797] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:57.743 [2024-10-25 18:02:16.009804] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:57.743 [2024-10-25 18:02:16.009810] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:57.743 [2024-10-25 18:02:16.009818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.743 [2024-10-25 18:02:16.009825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:57.743 [2024-10-25 18:02:16.009833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.909 ms 00:20:57.743 [2024-10-25 18:02:16.009841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.743 [2024-10-25 18:02:16.023011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.743 [2024-10-25 18:02:16.023040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:57.743 [2024-10-25 18:02:16.023049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.154 ms 00:20:57.743 [2024-10-25 18:02:16.023057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.743 [2024-10-25 18:02:16.023423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.743 [2024-10-25 18:02:16.023433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize P2L checkpointing 00:20:57.743 [2024-10-25 18:02:16.023441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:20:57.743 [2024-10-25 18:02:16.023448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.743 [2024-10-25 18:02:16.058571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:57.743 [2024-10-25 18:02:16.058600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:57.743 [2024-10-25 18:02:16.058614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:57.743 [2024-10-25 18:02:16.058622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.743 [2024-10-25 18:02:16.058674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:57.743 [2024-10-25 18:02:16.058682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:57.743 [2024-10-25 18:02:16.058690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:57.743 [2024-10-25 18:02:16.058698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.743 [2024-10-25 18:02:16.058745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:57.743 [2024-10-25 18:02:16.058755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:57.743 [2024-10-25 18:02:16.058763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:57.743 [2024-10-25 18:02:16.058775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.743 [2024-10-25 18:02:16.058790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:57.743 [2024-10-25 18:02:16.058797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:57.743 [2024-10-25 18:02:16.058804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:57.743 [2024-10-25 18:02:16.058811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.743 [2024-10-25 18:02:16.129652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:57.743 [2024-10-25 18:02:16.129794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:57.743 [2024-10-25 18:02:16.129807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:57.743 [2024-10-25 18:02:16.129818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.002 [2024-10-25 18:02:16.181127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.002 [2024-10-25 18:02:16.181260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:58.002 [2024-10-25 18:02:16.181272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.002 [2024-10-25 18:02:16.181279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.002 [2024-10-25 18:02:16.181346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.002 [2024-10-25 18:02:16.181354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:58.002 [2024-10-25 18:02:16.181361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.002 [2024-10-25 18:02:16.181367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.002 [2024-10-25 18:02:16.181399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.002 
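The statistics block above makes the write-amplification arithmetic explicit: WAF is total writes divided by user writes, the 960-block difference being metadata written alongside the user data, and with only Band 1 open the device-wide valid-LBA count equals that band's fill. A worked check (all figures from the log; the throughput estimate reads the elapsed time off the surrounding timestamps, so it is approximate):

```python
# Worked check of the ftl_dev_dump_stats figures above.
total_writes = 118720                      # "total writes"
user_writes  = 117760                      # "user writes"
waf = total_writes / user_writes
assert f"{waf:.4f}" == "1.0082"            # matches "WAF: 1.0082"
assert total_writes - user_writes == 960   # metadata blocks written

# Only Band 1 was written; its validity equals "total valid LBAs: 117760".
band1_valid, band_size = 117760, 261120
print(f"band 1 fill: {band1_valid / band_size:.1%}")   # ~45.1%

# Write pass: 1024 MB between ~18:01:38.2 and ~18:02:15.9 (~37.7 s).
print(f"~{1024 / 37.7:.0f} MBps")          # consistent with "(average 27 MBps)"
```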
[2024-10-25 18:02:16.181406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:58.002 [2024-10-25 18:02:16.181412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.002 [2024-10-25 18:02:16.181418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.002 [2024-10-25 18:02:16.181499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.002 [2024-10-25 18:02:16.181508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:58.002 [2024-10-25 18:02:16.181515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.002 [2024-10-25 18:02:16.181521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.002 [2024-10-25 18:02:16.181545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.002 [2024-10-25 18:02:16.181569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:58.002 [2024-10-25 18:02:16.181577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.002 [2024-10-25 18:02:16.181583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.002 [2024-10-25 18:02:16.181614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.002 [2024-10-25 18:02:16.181637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:58.002 [2024-10-25 18:02:16.181645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.002 [2024-10-25 18:02:16.181651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.002 [2024-10-25 18:02:16.181694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.002 [2024-10-25 18:02:16.181702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:58.002 [2024-10-25 18:02:16.181709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.002 [2024-10-25 18:02:16.181716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.002 [2024-10-25 18:02:16.181819] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 416.780 ms, result 0 00:20:59.903 00:20:59.903 00:20:59.903 18:02:17 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:20:59.903 [2024-10-25 18:02:17.995118] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:20:59.903 [2024-10-25 18:02:17.995234] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75705 ] 00:20:59.903 [2024-10-25 18:02:18.151234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.903 [2024-10-25 18:02:18.237590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.161 [2024-10-25 18:02:18.463578] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:00.161 [2024-10-25 18:02:18.463630] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:00.420 [2024-10-25 18:02:18.616179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.420 [2024-10-25 18:02:18.616356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:00.420 [2024-10-25 18:02:18.616377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:00.420 [2024-10-25 18:02:18.616384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.420 [2024-10-25 18:02:18.616426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.420 [2024-10-25 18:02:18.616434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:00.420 [2024-10-25 18:02:18.616444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:00.420 [2024-10-25 18:02:18.616450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.420 [2024-10-25 18:02:18.616466] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:00.420 [2024-10-25 18:02:18.617028] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:00.420 [2024-10-25 18:02:18.617044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.420 [2024-10-25 18:02:18.617051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:00.420 [2024-10-25 18:02:18.617057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:21:00.420 [2024-10-25 18:02:18.617063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.420 [2024-10-25 18:02:18.618318] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:00.420 [2024-10-25 18:02:18.628586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.420 [2024-10-25 18:02:18.628612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:00.420 [2024-10-25 18:02:18.628623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.270 ms 00:21:00.421 [2024-10-25 18:02:18.628629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.421 [2024-10-25 18:02:18.628676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.421 [2024-10-25 18:02:18.628687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:00.421 [2024-10-25 18:02:18.628694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:21:00.421 [2024-10-25 18:02:18.628700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.421 [2024-10-25 18:02:18.634870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
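The restore then reads the same range back: --skip=131072 --count=262144 mirror the write pass's --seek=131072, and with 4 KiB logical blocks (an inference from the write pass's 1024 MB copy total, since spdk_dd does not print its I/O unit here) the read covers 1024 MiB starting 512 MiB into ftl0. The offset arithmetic:

```python
# Offset arithmetic for the spdk_dd read-back above. The 4096-byte unit is
# an assumption inferred from the "Copying: 1024/1024 [MB]" total earlier.
BLOCK = 4096
count = 262144                             # --count, in blocks
skip  = 131072                             # --skip, input offset in blocks

assert count * BLOCK == 1024 * 1024**2     # 1024 MiB read back
assert skip  * BLOCK ==  512 * 1024**2     # starting 512 MiB into ftl0
# The write pass used --seek=131072, so both passes address the same range.
```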
00:21:00.421 [2024-10-25 18:02:18.635010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:00.421 [2024-10-25 18:02:18.635022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.126 ms 00:21:00.421 [2024-10-25 18:02:18.635030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.421 [2024-10-25 18:02:18.635092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.421 [2024-10-25 18:02:18.635100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:00.421 [2024-10-25 18:02:18.635107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:21:00.421 [2024-10-25 18:02:18.635113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.421 [2024-10-25 18:02:18.635147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.421 [2024-10-25 18:02:18.635156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:00.421 [2024-10-25 18:02:18.635163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:00.421 [2024-10-25 18:02:18.635169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.421 [2024-10-25 18:02:18.635184] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:00.421 [2024-10-25 18:02:18.638128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.421 [2024-10-25 18:02:18.638236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:00.421 [2024-10-25 18:02:18.638249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.949 ms 00:21:00.421 [2024-10-25 18:02:18.638259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.421 [2024-10-25 18:02:18.638289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.421 [2024-10-25 18:02:18.638297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:00.421 [2024-10-25 18:02:18.638303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:00.421 [2024-10-25 18:02:18.638309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.421 [2024-10-25 18:02:18.638324] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:00.421 [2024-10-25 18:02:18.638340] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:00.421 [2024-10-25 18:02:18.638369] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:00.421 [2024-10-25 18:02:18.638384] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:00.421 [2024-10-25 18:02:18.638466] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:00.421 [2024-10-25 18:02:18.638476] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:00.421 [2024-10-25 18:02:18.638484] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:00.421 [2024-10-25 18:02:18.638492] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:00.421 [2024-10-25 18:02:18.638500] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:00.421 [2024-10-25 18:02:18.638506] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:00.421 [2024-10-25 18:02:18.638512] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:00.421 [2024-10-25 18:02:18.638518] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:00.421 [2024-10-25 18:02:18.638524] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:00.421 [2024-10-25 18:02:18.638532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.421 [2024-10-25 18:02:18.638538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:00.421 [2024-10-25 18:02:18.638544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:21:00.421 [2024-10-25 18:02:18.638549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.421 [2024-10-25 18:02:18.638630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.421 [2024-10-25 18:02:18.638638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:00.421 [2024-10-25 18:02:18.638644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:00.421 [2024-10-25 18:02:18.638650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.421 [2024-10-25 18:02:18.638735] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:00.421 [2024-10-25 18:02:18.638745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:00.421 [2024-10-25 18:02:18.638752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:00.421 [2024-10-25 18:02:18.638757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.421 [2024-10-25 18:02:18.638764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:00.421 [2024-10-25 18:02:18.638770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:00.421 [2024-10-25 18:02:18.638775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:00.421 [2024-10-25 18:02:18.638780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:00.421 [2024-10-25 18:02:18.638787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:00.421 [2024-10-25 18:02:18.638792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:00.421 [2024-10-25 18:02:18.638798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:00.421 [2024-10-25 18:02:18.638804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:00.421 [2024-10-25 18:02:18.638810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:00.421 [2024-10-25 18:02:18.638815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:00.421 [2024-10-25 18:02:18.638820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:00.421 [2024-10-25 18:02:18.638831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.421 [2024-10-25 18:02:18.638838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:00.421 [2024-10-25 18:02:18.638844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:00.421 [2024-10-25 18:02:18.638850] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.421 [2024-10-25 18:02:18.638855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:00.421 [2024-10-25 18:02:18.638860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:00.421 [2024-10-25 18:02:18.638865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:00.421 [2024-10-25 18:02:18.638870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:00.421 [2024-10-25 18:02:18.638875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:00.421 [2024-10-25 18:02:18.638880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:00.421 [2024-10-25 18:02:18.638885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:00.421 [2024-10-25 18:02:18.638891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:00.421 [2024-10-25 18:02:18.638897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:00.421 [2024-10-25 18:02:18.638901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:00.421 [2024-10-25 18:02:18.638906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:00.421 [2024-10-25 18:02:18.638911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:00.421 [2024-10-25 18:02:18.638916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:00.421 [2024-10-25 18:02:18.638921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:00.421 [2024-10-25 18:02:18.638926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:00.421 [2024-10-25 18:02:18.638932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:00.421 [2024-10-25 18:02:18.638936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:00.421 [2024-10-25 18:02:18.638942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:00.421 [2024-10-25 18:02:18.638946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:00.421 [2024-10-25 18:02:18.638952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:00.421 [2024-10-25 18:02:18.638957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.421 [2024-10-25 18:02:18.638962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:00.421 [2024-10-25 18:02:18.638967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:00.421 [2024-10-25 18:02:18.638973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.421 [2024-10-25 18:02:18.638979] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:00.421 [2024-10-25 18:02:18.638985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:00.421 [2024-10-25 18:02:18.638991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:00.421 [2024-10-25 18:02:18.638997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.421 [2024-10-25 18:02:18.639003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:00.421 [2024-10-25 18:02:18.639009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:00.421 [2024-10-25 18:02:18.639015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:00.421 
[2024-10-25 18:02:18.639021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:00.421 [2024-10-25 18:02:18.639026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:00.421 [2024-10-25 18:02:18.639031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:00.421 [2024-10-25 18:02:18.639038] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:00.421 [2024-10-25 18:02:18.639044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:00.421 [2024-10-25 18:02:18.639051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:00.421 [2024-10-25 18:02:18.639057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:00.421 [2024-10-25 18:02:18.639063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:00.421 [2024-10-25 18:02:18.639068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:00.421 [2024-10-25 18:02:18.639074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:00.421 [2024-10-25 18:02:18.639079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:00.422 [2024-10-25 18:02:18.639084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:00.422 [2024-10-25 18:02:18.639090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:00.422 [2024-10-25 18:02:18.639095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:00.422 [2024-10-25 18:02:18.639100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:00.422 [2024-10-25 18:02:18.639106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:00.422 [2024-10-25 18:02:18.639112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:00.422 [2024-10-25 18:02:18.639117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:00.422 [2024-10-25 18:02:18.639123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:00.422 [2024-10-25 18:02:18.639128] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:00.422 [2024-10-25 18:02:18.639134] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:00.422 [2024-10-25 18:02:18.639143] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:00.422 [2024-10-25 18:02:18.639148] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:00.422 [2024-10-25 18:02:18.639154] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:00.422 [2024-10-25 18:02:18.639160] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:00.422 [2024-10-25 18:02:18.639165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.422 [2024-10-25 18:02:18.639171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:00.422 [2024-10-25 18:02:18.639177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:21:00.422 [2024-10-25 18:02:18.639182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.422 [2024-10-25 18:02:18.663277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.422 [2024-10-25 18:02:18.663308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:00.422 [2024-10-25 18:02:18.663317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.050 ms 00:21:00.422 [2024-10-25 18:02:18.663324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.422 [2024-10-25 18:02:18.663392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.422 [2024-10-25 18:02:18.663402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:00.422 [2024-10-25 18:02:18.663409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:21:00.422 [2024-10-25 18:02:18.663415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.422 [2024-10-25 18:02:18.704919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.422 [2024-10-25 18:02:18.705059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:00.422 [2024-10-25 18:02:18.705074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.465 ms 00:21:00.422 [2024-10-25 18:02:18.705082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.422 [2024-10-25 18:02:18.705116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.422 [2024-10-25 18:02:18.705124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:00.422 [2024-10-25 18:02:18.705132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:00.422 [2024-10-25 18:02:18.705142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.422 [2024-10-25 18:02:18.705549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.422 [2024-10-25 18:02:18.705584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:00.422 [2024-10-25 18:02:18.705592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:21:00.422 [2024-10-25 18:02:18.705599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.422 [2024-10-25 18:02:18.705715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.422 [2024-10-25 18:02:18.705724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:00.422 [2024-10-25 18:02:18.705731] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:21:00.422 [2024-10-25 18:02:18.705737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.422 [2024-10-25 18:02:18.717467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.422 [2024-10-25 18:02:18.717495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:00.422 [2024-10-25 18:02:18.717504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.710 ms 00:21:00.422 [2024-10-25 18:02:18.717510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.422 [2024-10-25 18:02:18.727701] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:00.422 [2024-10-25 18:02:18.727824] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:00.422 [2024-10-25 18:02:18.727837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.422 [2024-10-25 18:02:18.727844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:00.422 [2024-10-25 18:02:18.727852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.226 ms 00:21:00.422 [2024-10-25 18:02:18.727858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.422 [2024-10-25 18:02:18.746432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.422 [2024-10-25 18:02:18.746464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:00.422 [2024-10-25 18:02:18.746473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.544 ms 00:21:00.422 [2024-10-25 18:02:18.746480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.422 [2024-10-25 18:02:18.755442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.422 [2024-10-25 18:02:18.755474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:00.422 [2024-10-25 18:02:18.755483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.934 ms 00:21:00.422 [2024-10-25 18:02:18.755488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.422 [2024-10-25 18:02:18.764260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.422 [2024-10-25 18:02:18.764285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:00.422 [2024-10-25 18:02:18.764292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.745 ms 00:21:00.422 [2024-10-25 18:02:18.764298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.422 [2024-10-25 18:02:18.764778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.422 [2024-10-25 18:02:18.764794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:00.422 [2024-10-25 18:02:18.764802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:21:00.422 [2024-10-25 18:02:18.764808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.422 [2024-10-25 18:02:18.813184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.422 [2024-10-25 18:02:18.813217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:00.422 [2024-10-25 18:02:18.813227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 48.360 ms 00:21:00.422 [2024-10-25 18:02:18.813238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.422 [2024-10-25 18:02:18.821406] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:00.422 [2024-10-25 18:02:18.823951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.422 [2024-10-25 18:02:18.823976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:00.422 [2024-10-25 18:02:18.823985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.681 ms 00:21:00.422 [2024-10-25 18:02:18.823992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.422 [2024-10-25 18:02:18.824051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.422 [2024-10-25 18:02:18.824061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:00.422 [2024-10-25 18:02:18.824068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:00.422 [2024-10-25 18:02:18.824074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.422 [2024-10-25 18:02:18.825465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.422 [2024-10-25 18:02:18.825493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:00.422 [2024-10-25 18:02:18.825500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.342 ms 00:21:00.422 [2024-10-25 18:02:18.825507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.422 [2024-10-25 18:02:18.825526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.422 [2024-10-25 18:02:18.825533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:00.422 [2024-10-25 18:02:18.825540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:00.422 [2024-10-25 18:02:18.825546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.422 [2024-10-25 18:02:18.825588] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:00.422 [2024-10-25 18:02:18.825600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.422 [2024-10-25 18:02:18.825606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:00.422 [2024-10-25 18:02:18.825613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:00.422 [2024-10-25 18:02:18.825620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.422 [2024-10-25 18:02:18.843381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.422 [2024-10-25 18:02:18.843408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:00.422 [2024-10-25 18:02:18.843417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.740 ms 00:21:00.422 [2024-10-25 18:02:18.843424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.422 [2024-10-25 18:02:18.843486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.422 [2024-10-25 18:02:18.843495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:00.422 [2024-10-25 18:02:18.843503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:21:00.422 [2024-10-25 18:02:18.843509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:21:00.422 [2024-10-25 18:02:18.844379] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 227.835 ms, result 0 00:21:01.796  [2024-10-25T18:02:21.163Z] Copying: 36/1024 [MB] (36 MBps) [2024-10-25T18:02:22.098Z] Copying: 93/1024 [MB] (57 MBps) [2024-10-25T18:02:23.030Z] Copying: 145/1024 [MB] (51 MBps) [2024-10-25T18:02:24.404Z] Copying: 189/1024 [MB] (44 MBps) [2024-10-25T18:02:25.338Z] Copying: 232/1024 [MB] (43 MBps) [2024-10-25T18:02:26.271Z] Copying: 285/1024 [MB] (52 MBps) [2024-10-25T18:02:27.206Z] Copying: 335/1024 [MB] (50 MBps) [2024-10-25T18:02:28.138Z] Copying: 384/1024 [MB] (48 MBps) [2024-10-25T18:02:29.071Z] Copying: 433/1024 [MB] (48 MBps) [2024-10-25T18:02:30.006Z] Copying: 477/1024 [MB] (43 MBps) [2024-10-25T18:02:31.375Z] Copying: 524/1024 [MB] (47 MBps) [2024-10-25T18:02:32.311Z] Copying: 567/1024 [MB] (43 MBps) [2024-10-25T18:02:33.254Z] Copying: 614/1024 [MB] (46 MBps) [2024-10-25T18:02:34.197Z] Copying: 657/1024 [MB] (42 MBps) [2024-10-25T18:02:35.130Z] Copying: 695/1024 [MB] (38 MBps) [2024-10-25T18:02:36.062Z] Copying: 736/1024 [MB] (40 MBps) [2024-10-25T18:02:36.995Z] Copying: 779/1024 [MB] (43 MBps) [2024-10-25T18:02:38.369Z] Copying: 821/1024 [MB] (41 MBps) [2024-10-25T18:02:39.303Z] Copying: 861/1024 [MB] (39 MBps) [2024-10-25T18:02:40.237Z] Copying: 907/1024 [MB] (45 MBps) [2024-10-25T18:02:41.170Z] Copying: 957/1024 [MB] (50 MBps) [2024-10-25T18:02:41.428Z] Copying: 1009/1024 [MB] (52 MBps) [2024-10-25T18:02:41.686Z] Copying: 1024/1024 [MB] (average 45 MBps)[2024-10-25 18:02:41.499410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.251 [2024-10-25 18:02:41.499490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:23.251 [2024-10-25 18:02:41.499509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:23.251 [2024-10-25 18:02:41.499521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.251 [2024-10-25 18:02:41.499578] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:23.251 [2024-10-25 18:02:41.505905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.251 [2024-10-25 18:02:41.505949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:23.251 [2024-10-25 18:02:41.505964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.304 ms 00:21:23.251 [2024-10-25 18:02:41.505977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.251 [2024-10-25 18:02:41.506314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.251 [2024-10-25 18:02:41.506336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:23.251 [2024-10-25 18:02:41.506350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:21:23.251 [2024-10-25 18:02:41.506362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.251 [2024-10-25 18:02:41.512200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.251 [2024-10-25 18:02:41.512394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:23.251 [2024-10-25 18:02:41.512412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.816 ms 00:21:23.251 [2024-10-25 18:02:41.512419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.251 [2024-10-25 18:02:41.518597] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.251 [2024-10-25 18:02:41.518713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:23.251 [2024-10-25 18:02:41.518730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.145 ms 00:21:23.251 [2024-10-25 18:02:41.518739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.251 [2024-10-25 18:02:41.543068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.251 [2024-10-25 18:02:41.543099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:23.251 [2024-10-25 18:02:41.543110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.283 ms 00:21:23.251 [2024-10-25 18:02:41.543119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.251 [2024-10-25 18:02:41.557378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.251 [2024-10-25 18:02:41.557409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:23.251 [2024-10-25 18:02:41.557424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.225 ms 00:21:23.251 [2024-10-25 18:02:41.557432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.251 [2024-10-25 18:02:41.618492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.251 [2024-10-25 18:02:41.618543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:23.251 [2024-10-25 18:02:41.618564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.022 ms 00:21:23.251 [2024-10-25 18:02:41.618573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.251 [2024-10-25 18:02:41.641948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.251 [2024-10-25 18:02:41.641979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:23.251 [2024-10-25 18:02:41.641989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.360 ms 00:21:23.251 [2024-10-25 18:02:41.641997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.251 [2024-10-25 18:02:41.664879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.251 [2024-10-25 18:02:41.665017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:23.251 [2024-10-25 18:02:41.665044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.850 ms 00:21:23.251 [2024-10-25 18:02:41.665052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.510 [2024-10-25 18:02:41.687389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.510 [2024-10-25 18:02:41.687419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:23.510 [2024-10-25 18:02:41.687430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.308 ms 00:21:23.510 [2024-10-25 18:02:41.687437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.510 [2024-10-25 18:02:41.709221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.510 [2024-10-25 18:02:41.709249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:23.510 [2024-10-25 18:02:41.709259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.731 ms 00:21:23.510 [2024-10-25 18:02:41.709266] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:23.510 [2024-10-25 18:02:41.709295] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:23.510 [2024-10-25 18:02:41.709308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:21:23.510 [2024-10-25 18:02:41.709319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:23.510 [2024-10-25 18:02:41.709327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:23.510 [2024-10-25 18:02:41.709335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:23.510 [2024-10-25 18:02:41.709343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:23.510 [2024-10-25 18:02:41.709351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:23.510 [2024-10-25 18:02:41.709358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:23.510 [2024-10-25 18:02:41.709366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:23.510 [2024-10-25 18:02:41.709374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:23.510 [2024-10-25 18:02:41.709381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 
wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709896] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.709993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.710001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.710008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.710016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.710023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.710030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.710037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.710045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.710052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.710059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.710066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.710073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.710080] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.710087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:23.511 [2024-10-25 18:02:41.710103] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:23.511 [2024-10-25 18:02:41.710111] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f2aaecaf-110f-466b-a18e-9eabbbdbe30d 00:21:23.511 [2024-10-25 18:02:41.710119] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:21:23.511 [2024-10-25 18:02:41.710127] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 14272 00:21:23.511 [2024-10-25 18:02:41.710138] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 13312 00:21:23.512 [2024-10-25 18:02:41.710146] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0721 00:21:23.512 [2024-10-25 18:02:41.710153] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:23.512 [2024-10-25 18:02:41.710161] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:23.512 [2024-10-25 18:02:41.710171] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:23.512 [2024-10-25 18:02:41.710183] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:23.512 [2024-10-25 18:02:41.710190] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:23.512 [2024-10-25 18:02:41.710197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.512 [2024-10-25 18:02:41.710205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:23.512 [2024-10-25 18:02:41.710213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.902 ms 00:21:23.512 [2024-10-25 18:02:41.710220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.512 [2024-10-25 18:02:41.723394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.512 [2024-10-25 18:02:41.723514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:23.512 [2024-10-25 18:02:41.723528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.159 ms 00:21:23.512 [2024-10-25 18:02:41.723536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.512 [2024-10-25 18:02:41.723908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.512 [2024-10-25 18:02:41.723920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:23.512 [2024-10-25 18:02:41.723928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:21:23.512 [2024-10-25 18:02:41.723936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.512 [2024-10-25 18:02:41.759038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.512 [2024-10-25 18:02:41.759069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:23.512 [2024-10-25 18:02:41.759084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.512 [2024-10-25 18:02:41.759092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.512 [2024-10-25 18:02:41.759146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.512 [2024-10-25 18:02:41.759155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 
00:21:23.512 [2024-10-25 18:02:41.759163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.512 [2024-10-25 18:02:41.759170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.512 [2024-10-25 18:02:41.759219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.512 [2024-10-25 18:02:41.759229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:23.512 [2024-10-25 18:02:41.759237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.512 [2024-10-25 18:02:41.759248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.512 [2024-10-25 18:02:41.759263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.512 [2024-10-25 18:02:41.759271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:23.512 [2024-10-25 18:02:41.759279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.512 [2024-10-25 18:02:41.759286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.512 [2024-10-25 18:02:41.841856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.512 [2024-10-25 18:02:41.842019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:23.512 [2024-10-25 18:02:41.842037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.512 [2024-10-25 18:02:41.842051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.512 [2024-10-25 18:02:41.908438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.512 [2024-10-25 18:02:41.908474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:23.512 [2024-10-25 18:02:41.908484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.512 [2024-10-25 18:02:41.908493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.512 [2024-10-25 18:02:41.908584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.512 [2024-10-25 18:02:41.908595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:23.512 [2024-10-25 18:02:41.908604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.512 [2024-10-25 18:02:41.908612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.512 [2024-10-25 18:02:41.908649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.512 [2024-10-25 18:02:41.908659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:23.512 [2024-10-25 18:02:41.908668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.512 [2024-10-25 18:02:41.908676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.512 [2024-10-25 18:02:41.908763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.512 [2024-10-25 18:02:41.908773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:23.512 [2024-10-25 18:02:41.908782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.512 [2024-10-25 18:02:41.908790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.512 [2024-10-25 18:02:41.908823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.512 [2024-10-25 18:02:41.908877] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:23.512 [2024-10-25 18:02:41.908888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.512 [2024-10-25 18:02:41.908896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.512 [2024-10-25 18:02:41.908934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.512 [2024-10-25 18:02:41.908943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:23.512 [2024-10-25 18:02:41.908952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.512 [2024-10-25 18:02:41.908959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.512 [2024-10-25 18:02:41.909012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.512 [2024-10-25 18:02:41.909023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:23.512 [2024-10-25 18:02:41.909032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.512 [2024-10-25 18:02:41.909040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.512 [2024-10-25 18:02:41.909156] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 409.729 ms, result 0 00:21:24.447 00:21:24.447 00:21:24.447 18:02:42 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:26.979 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:26.979 18:02:44 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:21:26.979 18:02:44 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:21:26.979 18:02:44 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:26.979 18:02:45 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:26.979 18:02:45 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:26.979 Process with pid 74181 is not found 00:21:26.979 18:02:45 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 74181 00:21:26.979 18:02:45 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 74181 ']' 00:21:26.979 18:02:45 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 74181 00:21:26.979 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74181) - No such process 00:21:26.979 18:02:45 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 74181 is not found' 00:21:26.979 18:02:45 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:21:26.979 Remove shared memory files 00:21:26.979 18:02:45 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:26.979 18:02:45 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:21:26.979 18:02:45 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:21:26.979 18:02:45 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:21:26.979 18:02:45 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:26.979 18:02:45 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:21:26.979 ************************************ 00:21:26.979 END TEST ftl_restore 00:21:26.979 ************************************ 00:21:26.979 00:21:26.979 real 2m59.517s 00:21:26.979 user 2m48.903s 00:21:26.979 sys 0m11.969s 00:21:26.979 18:02:45 ftl.ftl_restore -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:21:26.979 18:02:45 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:21:26.979 18:02:45 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:21:26.979 18:02:45 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:26.979 18:02:45 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:26.979 18:02:45 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:26.979 ************************************ 00:21:26.979 START TEST ftl_dirty_shutdown 00:21:26.979 ************************************ 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:21:26.979 * Looking for test storage... 00:21:26.979 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1689 -- # lcov --version 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:21:26.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.979 --rc genhtml_branch_coverage=1 00:21:26.979 --rc genhtml_function_coverage=1 00:21:26.979 --rc genhtml_legend=1 00:21:26.979 --rc geninfo_all_blocks=1 00:21:26.979 --rc geninfo_unexecuted_blocks=1 00:21:26.979 00:21:26.979 ' 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:21:26.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.979 --rc genhtml_branch_coverage=1 00:21:26.979 --rc genhtml_function_coverage=1 00:21:26.979 --rc genhtml_legend=1 00:21:26.979 --rc geninfo_all_blocks=1 00:21:26.979 --rc geninfo_unexecuted_blocks=1 00:21:26.979 00:21:26.979 ' 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:21:26.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.979 --rc genhtml_branch_coverage=1 00:21:26.979 --rc genhtml_function_coverage=1 00:21:26.979 --rc genhtml_legend=1 00:21:26.979 --rc geninfo_all_blocks=1 00:21:26.979 --rc geninfo_unexecuted_blocks=1 00:21:26.979 00:21:26.979 ' 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:21:26.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.979 --rc genhtml_branch_coverage=1 00:21:26.979 --rc genhtml_function_coverage=1 00:21:26.979 --rc genhtml_legend=1 00:21:26.979 --rc geninfo_all_blocks=1 00:21:26.979 --rc geninfo_unexecuted_blocks=1 00:21:26.979 00:21:26.979 ' 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:26.979 18:02:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:26.980 18:02:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:26.980 18:02:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:26.980 18:02:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:26.980 18:02:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:26.980 18:02:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:26.980 18:02:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:26.980 18:02:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:26.980 18:02:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:26.980 18:02:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:26.980 18:02:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:26.980 18:02:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:26.980 18:02:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:26.980 18:02:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:21:26.980 18:02:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:21:26.980 18:02:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:26.980 18:02:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:21:26.980 18:02:45 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:21:26.980 18:02:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:21:26.980 18:02:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:21:26.980 18:02:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:21:26.980 18:02:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:21:26.980 18:02:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:26.980 18:02:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=76054 00:21:26.980 18:02:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 76054 00:21:26.980 18:02:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 76054 ']' 00:21:26.980 18:02:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.980 18:02:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:26.980 18:02:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:21:26.980 18:02:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:26.980 18:02:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:26.980 18:02:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:26.980 [2024-10-25 18:02:45.332785] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
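At this point dirty_shutdown.sh has parsed its options (-c 0000:00:10.0 for the NV cache, 0000:00:11.0 as the base device), armed the cleanup trap, and launched spdk_tgt pinned to core 0 (-m 0x1); waitforlisten then blocks until the target's RPC socket answers. A minimal sketch of that start-and-wait pattern, assuming the default /var/tmp/spdk.sock socket (the retry count and sleep interval below are illustrative, not the script's actual values):

    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt   # binary path taken from this log
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$SPDK_BIN" -m 0x1 &                                       # run the target on core 0 only
    svcpid=$!                                                  # pid recorded for later cleanup
    for i in $(seq 1 100); do
        # poll the RPC socket until the target responds, as waitforlisten does
        "$RPC" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done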
00:21:26.980 [2024-10-25 18:02:45.333054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76054 ] 00:21:27.259 [2024-10-25 18:02:45.492173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.260 [2024-10-25 18:02:45.602226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.834 18:02:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:27.834 18:02:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:21:27.834 18:02:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:27.834 18:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:21:27.834 18:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:27.834 18:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:21:27.834 18:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:21:27.834 18:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:28.091 18:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:28.091 18:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:21:28.091 18:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:28.091 18:02:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:21:28.091 18:02:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:28.091 18:02:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:21:28.091 18:02:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:21:28.091 18:02:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:28.349 18:02:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:28.349 { 00:21:28.349 "name": "nvme0n1", 00:21:28.349 "aliases": [ 00:21:28.349 "2ebb44ab-7ce5-455b-993d-5e89056bef20" 00:21:28.349 ], 00:21:28.349 "product_name": "NVMe disk", 00:21:28.349 "block_size": 4096, 00:21:28.349 "num_blocks": 1310720, 00:21:28.349 "uuid": "2ebb44ab-7ce5-455b-993d-5e89056bef20", 00:21:28.349 "numa_id": -1, 00:21:28.349 "assigned_rate_limits": { 00:21:28.349 "rw_ios_per_sec": 0, 00:21:28.349 "rw_mbytes_per_sec": 0, 00:21:28.349 "r_mbytes_per_sec": 0, 00:21:28.349 "w_mbytes_per_sec": 0 00:21:28.349 }, 00:21:28.349 "claimed": true, 00:21:28.349 "claim_type": "read_many_write_one", 00:21:28.349 "zoned": false, 00:21:28.349 "supported_io_types": { 00:21:28.349 "read": true, 00:21:28.349 "write": true, 00:21:28.349 "unmap": true, 00:21:28.349 "flush": true, 00:21:28.349 "reset": true, 00:21:28.349 "nvme_admin": true, 00:21:28.349 "nvme_io": true, 00:21:28.349 "nvme_io_md": false, 00:21:28.349 "write_zeroes": true, 00:21:28.349 "zcopy": false, 00:21:28.349 "get_zone_info": false, 00:21:28.349 "zone_management": false, 00:21:28.349 "zone_append": false, 00:21:28.349 "compare": true, 00:21:28.349 "compare_and_write": false, 00:21:28.349 "abort": true, 00:21:28.349 "seek_hole": false, 00:21:28.349 "seek_data": false, 00:21:28.349 
"copy": true, 00:21:28.349 "nvme_iov_md": false 00:21:28.349 }, 00:21:28.349 "driver_specific": { 00:21:28.349 "nvme": [ 00:21:28.349 { 00:21:28.349 "pci_address": "0000:00:11.0", 00:21:28.349 "trid": { 00:21:28.349 "trtype": "PCIe", 00:21:28.349 "traddr": "0000:00:11.0" 00:21:28.349 }, 00:21:28.349 "ctrlr_data": { 00:21:28.349 "cntlid": 0, 00:21:28.349 "vendor_id": "0x1b36", 00:21:28.349 "model_number": "QEMU NVMe Ctrl", 00:21:28.349 "serial_number": "12341", 00:21:28.349 "firmware_revision": "8.0.0", 00:21:28.349 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:28.349 "oacs": { 00:21:28.349 "security": 0, 00:21:28.349 "format": 1, 00:21:28.349 "firmware": 0, 00:21:28.349 "ns_manage": 1 00:21:28.349 }, 00:21:28.349 "multi_ctrlr": false, 00:21:28.349 "ana_reporting": false 00:21:28.349 }, 00:21:28.349 "vs": { 00:21:28.349 "nvme_version": "1.4" 00:21:28.349 }, 00:21:28.349 "ns_data": { 00:21:28.349 "id": 1, 00:21:28.349 "can_share": false 00:21:28.349 } 00:21:28.349 } 00:21:28.349 ], 00:21:28.349 "mp_policy": "active_passive" 00:21:28.349 } 00:21:28.349 } 00:21:28.349 ]' 00:21:28.349 18:02:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:28.349 18:02:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:21:28.349 18:02:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:28.607 18:02:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:21:28.607 18:02:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:21:28.607 18:02:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:21:28.607 18:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:21:28.607 18:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:28.607 18:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:21:28.607 18:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:28.607 18:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:28.607 18:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=867d4a25-516d-4326-8393-271889abf794 00:21:28.607 18:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:21:28.607 18:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 867d4a25-516d-4326-8393-271889abf794 00:21:28.866 18:02:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:29.124 18:02:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=0cba73cf-fff6-43a9-a58c-dda97cce5155 00:21:29.124 18:02:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0cba73cf-fff6-43a9-a58c-dda97cce5155 00:21:29.383 18:02:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=ad5c89ef-4569-4440-a133-9a97163d6c2a 00:21:29.383 18:02:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:21:29.383 18:02:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ad5c89ef-4569-4440-a133-9a97163d6c2a 00:21:29.383 18:02:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:21:29.383 18:02:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:21:29.383 18:02:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=ad5c89ef-4569-4440-a133-9a97163d6c2a 00:21:29.383 18:02:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:21:29.383 18:02:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size ad5c89ef-4569-4440-a133-9a97163d6c2a 00:21:29.383 18:02:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=ad5c89ef-4569-4440-a133-9a97163d6c2a 00:21:29.383 18:02:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:29.383 18:02:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:21:29.383 18:02:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:21:29.383 18:02:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ad5c89ef-4569-4440-a133-9a97163d6c2a 00:21:29.642 18:02:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:29.642 { 00:21:29.642 "name": "ad5c89ef-4569-4440-a133-9a97163d6c2a", 00:21:29.642 "aliases": [ 00:21:29.642 "lvs/nvme0n1p0" 00:21:29.642 ], 00:21:29.642 "product_name": "Logical Volume", 00:21:29.642 "block_size": 4096, 00:21:29.642 "num_blocks": 26476544, 00:21:29.642 "uuid": "ad5c89ef-4569-4440-a133-9a97163d6c2a", 00:21:29.642 "assigned_rate_limits": { 00:21:29.642 "rw_ios_per_sec": 0, 00:21:29.642 "rw_mbytes_per_sec": 0, 00:21:29.642 "r_mbytes_per_sec": 0, 00:21:29.642 "w_mbytes_per_sec": 0 00:21:29.642 }, 00:21:29.642 "claimed": false, 00:21:29.642 "zoned": false, 00:21:29.642 "supported_io_types": { 00:21:29.642 "read": true, 00:21:29.642 "write": true, 00:21:29.642 "unmap": true, 00:21:29.642 "flush": false, 00:21:29.642 "reset": true, 00:21:29.642 "nvme_admin": false, 00:21:29.642 "nvme_io": false, 00:21:29.642 "nvme_io_md": false, 00:21:29.642 "write_zeroes": true, 00:21:29.642 "zcopy": false, 00:21:29.642 "get_zone_info": false, 00:21:29.642 "zone_management": false, 00:21:29.642 "zone_append": false, 00:21:29.642 "compare": false, 00:21:29.642 "compare_and_write": false, 00:21:29.642 "abort": false, 00:21:29.642 "seek_hole": true, 00:21:29.642 "seek_data": true, 00:21:29.642 "copy": false, 00:21:29.642 "nvme_iov_md": false 00:21:29.642 }, 00:21:29.642 "driver_specific": { 00:21:29.642 "lvol": { 00:21:29.642 "lvol_store_uuid": "0cba73cf-fff6-43a9-a58c-dda97cce5155", 00:21:29.642 "base_bdev": "nvme0n1", 00:21:29.642 "thin_provision": true, 00:21:29.642 "num_allocated_clusters": 0, 00:21:29.642 "snapshot": false, 00:21:29.642 "clone": false, 00:21:29.642 "esnap_clone": false 00:21:29.642 } 00:21:29.642 } 00:21:29.642 } 00:21:29.642 ]' 00:21:29.642 18:02:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:29.642 18:02:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:21:29.642 18:02:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:29.642 18:02:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:29.642 18:02:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:29.642 18:02:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:21:29.642 18:02:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:21:29.642 18:02:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:21:29.642 18:02:47 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:29.901 18:02:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:29.901 18:02:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:29.901 18:02:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size ad5c89ef-4569-4440-a133-9a97163d6c2a 00:21:29.901 18:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=ad5c89ef-4569-4440-a133-9a97163d6c2a 00:21:29.901 18:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:29.901 18:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:21:29.901 18:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:21:29.901 18:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ad5c89ef-4569-4440-a133-9a97163d6c2a 00:21:30.160 18:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:30.160 { 00:21:30.160 "name": "ad5c89ef-4569-4440-a133-9a97163d6c2a", 00:21:30.160 "aliases": [ 00:21:30.160 "lvs/nvme0n1p0" 00:21:30.160 ], 00:21:30.160 "product_name": "Logical Volume", 00:21:30.160 "block_size": 4096, 00:21:30.160 "num_blocks": 26476544, 00:21:30.160 "uuid": "ad5c89ef-4569-4440-a133-9a97163d6c2a", 00:21:30.160 "assigned_rate_limits": { 00:21:30.160 "rw_ios_per_sec": 0, 00:21:30.160 "rw_mbytes_per_sec": 0, 00:21:30.160 "r_mbytes_per_sec": 0, 00:21:30.160 "w_mbytes_per_sec": 0 00:21:30.160 }, 00:21:30.160 "claimed": false, 00:21:30.160 "zoned": false, 00:21:30.160 "supported_io_types": { 00:21:30.160 "read": true, 00:21:30.160 "write": true, 00:21:30.160 "unmap": true, 00:21:30.160 "flush": false, 00:21:30.160 "reset": true, 00:21:30.160 "nvme_admin": false, 00:21:30.160 "nvme_io": false, 00:21:30.160 "nvme_io_md": false, 00:21:30.160 "write_zeroes": true, 00:21:30.160 "zcopy": false, 00:21:30.160 "get_zone_info": false, 00:21:30.160 "zone_management": false, 00:21:30.160 "zone_append": false, 00:21:30.160 "compare": false, 00:21:30.160 "compare_and_write": false, 00:21:30.160 "abort": false, 00:21:30.160 "seek_hole": true, 00:21:30.160 "seek_data": true, 00:21:30.160 "copy": false, 00:21:30.160 "nvme_iov_md": false 00:21:30.160 }, 00:21:30.160 "driver_specific": { 00:21:30.160 "lvol": { 00:21:30.160 "lvol_store_uuid": "0cba73cf-fff6-43a9-a58c-dda97cce5155", 00:21:30.160 "base_bdev": "nvme0n1", 00:21:30.160 "thin_provision": true, 00:21:30.160 "num_allocated_clusters": 0, 00:21:30.160 "snapshot": false, 00:21:30.160 "clone": false, 00:21:30.160 "esnap_clone": false 00:21:30.160 } 00:21:30.160 } 00:21:30.160 } 00:21:30.160 ]' 00:21:30.160 18:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:30.160 18:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:21:30.160 18:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:30.160 18:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:30.160 18:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:30.160 18:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:21:30.160 18:02:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:21:30.160 18:02:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:30.419 18:02:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:21:30.419 18:02:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size ad5c89ef-4569-4440-a133-9a97163d6c2a 00:21:30.419 18:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=ad5c89ef-4569-4440-a133-9a97163d6c2a 00:21:30.419 18:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:30.419 18:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:21:30.419 18:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:21:30.419 18:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ad5c89ef-4569-4440-a133-9a97163d6c2a 00:21:30.419 18:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:30.419 { 00:21:30.419 "name": "ad5c89ef-4569-4440-a133-9a97163d6c2a", 00:21:30.419 "aliases": [ 00:21:30.419 "lvs/nvme0n1p0" 00:21:30.419 ], 00:21:30.419 "product_name": "Logical Volume", 00:21:30.419 "block_size": 4096, 00:21:30.419 "num_blocks": 26476544, 00:21:30.419 "uuid": "ad5c89ef-4569-4440-a133-9a97163d6c2a", 00:21:30.419 "assigned_rate_limits": { 00:21:30.419 "rw_ios_per_sec": 0, 00:21:30.419 "rw_mbytes_per_sec": 0, 00:21:30.419 "r_mbytes_per_sec": 0, 00:21:30.419 "w_mbytes_per_sec": 0 00:21:30.419 }, 00:21:30.419 "claimed": false, 00:21:30.419 "zoned": false, 00:21:30.419 "supported_io_types": { 00:21:30.419 "read": true, 00:21:30.419 "write": true, 00:21:30.419 "unmap": true, 00:21:30.419 "flush": false, 00:21:30.419 "reset": true, 00:21:30.419 "nvme_admin": false, 00:21:30.419 "nvme_io": false, 00:21:30.419 "nvme_io_md": false, 00:21:30.419 "write_zeroes": true, 00:21:30.419 "zcopy": false, 00:21:30.419 "get_zone_info": false, 00:21:30.419 "zone_management": false, 00:21:30.419 "zone_append": false, 00:21:30.419 "compare": false, 00:21:30.419 "compare_and_write": false, 00:21:30.419 "abort": false, 00:21:30.419 "seek_hole": true, 00:21:30.419 "seek_data": true, 00:21:30.419 "copy": false, 00:21:30.419 "nvme_iov_md": false 00:21:30.419 }, 00:21:30.419 "driver_specific": { 00:21:30.419 "lvol": { 00:21:30.419 "lvol_store_uuid": "0cba73cf-fff6-43a9-a58c-dda97cce5155", 00:21:30.419 "base_bdev": "nvme0n1", 00:21:30.419 "thin_provision": true, 00:21:30.419 "num_allocated_clusters": 0, 00:21:30.419 "snapshot": false, 00:21:30.419 "clone": false, 00:21:30.419 "esnap_clone": false 00:21:30.419 } 00:21:30.419 } 00:21:30.419 } 00:21:30.419 ]' 00:21:30.419 18:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:30.678 18:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:21:30.678 18:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:30.679 18:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:30.679 18:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:30.679 18:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:21:30.679 18:02:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:21:30.679 18:02:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d ad5c89ef-4569-4440-a133-9a97163d6c2a 
--l2p_dram_limit 10' 00:21:30.679 18:02:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:21:30.679 18:02:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:21:30.679 18:02:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:30.679 18:02:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ad5c89ef-4569-4440-a133-9a97163d6c2a --l2p_dram_limit 10 -c nvc0n1p0 00:21:30.679 [2024-10-25 18:02:49.074995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.679 [2024-10-25 18:02:49.075161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:30.679 [2024-10-25 18:02:49.075184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:30.679 [2024-10-25 18:02:49.075192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.679 [2024-10-25 18:02:49.075255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.679 [2024-10-25 18:02:49.075264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:30.679 [2024-10-25 18:02:49.075273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:21:30.679 [2024-10-25 18:02:49.075295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.679 [2024-10-25 18:02:49.075319] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:30.679 [2024-10-25 18:02:49.075942] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:30.679 [2024-10-25 18:02:49.075960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.679 [2024-10-25 18:02:49.075967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:30.679 [2024-10-25 18:02:49.075975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.648 ms 00:21:30.679 [2024-10-25 18:02:49.075982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.679 [2024-10-25 18:02:49.076039] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a4c6862a-3d0b-43e3-bcc4-02b5c61c45c7 00:21:30.679 [2024-10-25 18:02:49.077302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.679 [2024-10-25 18:02:49.077335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:30.679 [2024-10-25 18:02:49.077344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:21:30.679 [2024-10-25 18:02:49.077353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.679 [2024-10-25 18:02:49.084120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.679 [2024-10-25 18:02:49.084149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:30.679 [2024-10-25 18:02:49.084157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.725 ms 00:21:30.679 [2024-10-25 18:02:49.084166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.679 [2024-10-25 18:02:49.084240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.679 [2024-10-25 18:02:49.084250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:30.679 [2024-10-25 18:02:49.084257] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:21:30.679 [2024-10-25 18:02:49.084268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.679 [2024-10-25 18:02:49.084305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.679 [2024-10-25 18:02:49.084315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:30.679 [2024-10-25 18:02:49.084322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:30.679 [2024-10-25 18:02:49.084330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.679 [2024-10-25 18:02:49.084349] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:30.679 [2024-10-25 18:02:49.087567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.679 [2024-10-25 18:02:49.087592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:30.679 [2024-10-25 18:02:49.087603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.223 ms 00:21:30.679 [2024-10-25 18:02:49.087612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.679 [2024-10-25 18:02:49.087640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.679 [2024-10-25 18:02:49.087647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:30.679 [2024-10-25 18:02:49.087656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:30.679 [2024-10-25 18:02:49.087661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.679 [2024-10-25 18:02:49.087682] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:30.679 [2024-10-25 18:02:49.087794] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:30.679 [2024-10-25 18:02:49.087807] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:30.679 [2024-10-25 18:02:49.087816] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:30.679 [2024-10-25 18:02:49.087827] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:30.679 [2024-10-25 18:02:49.087834] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:30.679 [2024-10-25 18:02:49.087842] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:30.679 [2024-10-25 18:02:49.087848] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:30.679 [2024-10-25 18:02:49.087856] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:30.679 [2024-10-25 18:02:49.087862] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:30.679 [2024-10-25 18:02:49.087871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.679 [2024-10-25 18:02:49.087877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:30.679 [2024-10-25 18:02:49.087885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:21:30.679 [2024-10-25 18:02:49.087898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.679 [2024-10-25 18:02:49.087964] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.679 [2024-10-25 18:02:49.087971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:30.679 [2024-10-25 18:02:49.087978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:30.679 [2024-10-25 18:02:49.087984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.679 [2024-10-25 18:02:49.088060] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:30.679 [2024-10-25 18:02:49.088069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:30.679 [2024-10-25 18:02:49.088078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:30.679 [2024-10-25 18:02:49.088084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:30.679 [2024-10-25 18:02:49.088091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:30.679 [2024-10-25 18:02:49.088096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:30.679 [2024-10-25 18:02:49.088103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:30.679 [2024-10-25 18:02:49.088108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:30.679 [2024-10-25 18:02:49.088115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:30.679 [2024-10-25 18:02:49.088121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:30.679 [2024-10-25 18:02:49.088128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:30.679 [2024-10-25 18:02:49.088134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:30.679 [2024-10-25 18:02:49.088141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:30.679 [2024-10-25 18:02:49.088146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:30.679 [2024-10-25 18:02:49.088153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:30.679 [2024-10-25 18:02:49.088160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:30.679 [2024-10-25 18:02:49.088168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:30.679 [2024-10-25 18:02:49.088174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:30.679 [2024-10-25 18:02:49.088181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:30.679 [2024-10-25 18:02:49.088187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:30.679 [2024-10-25 18:02:49.088194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:30.679 [2024-10-25 18:02:49.088199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:30.679 [2024-10-25 18:02:49.088206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:30.679 [2024-10-25 18:02:49.088211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:30.679 [2024-10-25 18:02:49.088218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:30.679 [2024-10-25 18:02:49.088226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:30.679 [2024-10-25 18:02:49.088233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:30.679 [2024-10-25 18:02:49.088238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:30.679 [2024-10-25 18:02:49.088245] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:30.679 [2024-10-25 18:02:49.088251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:30.679 [2024-10-25 18:02:49.088257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:30.679 [2024-10-25 18:02:49.088262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:30.679 [2024-10-25 18:02:49.088270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:30.679 [2024-10-25 18:02:49.088275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:30.679 [2024-10-25 18:02:49.088282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:30.679 [2024-10-25 18:02:49.088288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:30.679 [2024-10-25 18:02:49.088294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:30.679 [2024-10-25 18:02:49.088299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:30.679 [2024-10-25 18:02:49.088305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:30.679 [2024-10-25 18:02:49.088310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:30.679 [2024-10-25 18:02:49.088317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:30.679 [2024-10-25 18:02:49.088322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:30.679 [2024-10-25 18:02:49.088329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:30.680 [2024-10-25 18:02:49.088333] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:30.680 [2024-10-25 18:02:49.088342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:30.680 [2024-10-25 18:02:49.088347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:30.680 [2024-10-25 18:02:49.088356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:30.680 [2024-10-25 18:02:49.088362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:30.680 [2024-10-25 18:02:49.088370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:30.680 [2024-10-25 18:02:49.088375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:30.680 [2024-10-25 18:02:49.088382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:30.680 [2024-10-25 18:02:49.088387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:30.680 [2024-10-25 18:02:49.088393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:30.680 [2024-10-25 18:02:49.088402] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:30.680 [2024-10-25 18:02:49.088411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:30.680 [2024-10-25 18:02:49.088418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:30.680 [2024-10-25 18:02:49.088426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:30.680 [2024-10-25 18:02:49.088431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:30.680 [2024-10-25 18:02:49.088439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:30.680 [2024-10-25 18:02:49.088444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:30.680 [2024-10-25 18:02:49.088452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:30.680 [2024-10-25 18:02:49.088458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:30.680 [2024-10-25 18:02:49.088465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:30.680 [2024-10-25 18:02:49.088470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:30.680 [2024-10-25 18:02:49.088479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:30.680 [2024-10-25 18:02:49.088484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:30.680 [2024-10-25 18:02:49.088491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:30.680 [2024-10-25 18:02:49.088496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:30.680 [2024-10-25 18:02:49.088503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:30.680 [2024-10-25 18:02:49.088509] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:30.680 [2024-10-25 18:02:49.088518] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:30.680 [2024-10-25 18:02:49.088526] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:30.680 [2024-10-25 18:02:49.088534] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:30.680 [2024-10-25 18:02:49.088539] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:30.680 [2024-10-25 18:02:49.088547] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:30.680 [2024-10-25 18:02:49.088568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.680 [2024-10-25 18:02:49.088576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:30.680 [2024-10-25 18:02:49.088582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:21:30.680 [2024-10-25 18:02:49.088590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.680 [2024-10-25 18:02:49.088633] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:21:30.680 [2024-10-25 18:02:49.088645] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:34.878 [2024-10-25 18:02:52.913224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.878 [2024-10-25 18:02:52.913486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:34.878 [2024-10-25 18:02:52.913510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3824.575 ms 00:21:34.878 [2024-10-25 18:02:52.913521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.878 [2024-10-25 18:02:52.941549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.878 [2024-10-25 18:02:52.941604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:34.878 [2024-10-25 18:02:52.941618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.785 ms 00:21:34.878 [2024-10-25 18:02:52.941628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.878 [2024-10-25 18:02:52.941782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.878 [2024-10-25 18:02:52.941797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:34.878 [2024-10-25 18:02:52.941806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:21:34.878 [2024-10-25 18:02:52.941817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.878 [2024-10-25 18:02:52.974395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.878 [2024-10-25 18:02:52.974432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:34.878 [2024-10-25 18:02:52.974443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.545 ms 00:21:34.878 [2024-10-25 18:02:52.974453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.878 [2024-10-25 18:02:52.974482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.878 [2024-10-25 18:02:52.974493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:34.878 [2024-10-25 18:02:52.974501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:34.878 [2024-10-25 18:02:52.974514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.878 [2024-10-25 18:02:52.974957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.878 [2024-10-25 18:02:52.974976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:34.878 [2024-10-25 18:02:52.974985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:21:34.878 [2024-10-25 18:02:52.974995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.878 [2024-10-25 18:02:52.975095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.878 [2024-10-25 18:02:52.975113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:34.878 [2024-10-25 18:02:52.975121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:21:34.878 [2024-10-25 18:02:52.975133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.878 [2024-10-25 18:02:52.990515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.878 [2024-10-25 18:02:52.990549] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:34.878 [2024-10-25 18:02:52.990574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.363 ms 00:21:34.878 [2024-10-25 18:02:52.990587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.878 [2024-10-25 18:02:53.002622] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:34.878 [2024-10-25 18:02:53.005948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.878 [2024-10-25 18:02:53.005976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:34.878 [2024-10-25 18:02:53.005989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.290 ms 00:21:34.878 [2024-10-25 18:02:53.005997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.878 [2024-10-25 18:02:53.168687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.878 [2024-10-25 18:02:53.168728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:34.878 [2024-10-25 18:02:53.168744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 162.661 ms 00:21:34.878 [2024-10-25 18:02:53.168753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.878 [2024-10-25 18:02:53.168933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.878 [2024-10-25 18:02:53.168946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:34.878 [2024-10-25 18:02:53.168958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:21:34.878 [2024-10-25 18:02:53.168969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.878 [2024-10-25 18:02:53.192485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.878 [2024-10-25 18:02:53.192517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:34.878 [2024-10-25 18:02:53.192531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.472 ms 00:21:34.878 [2024-10-25 18:02:53.192539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.878 [2024-10-25 18:02:53.215590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.878 [2024-10-25 18:02:53.215627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:34.878 [2024-10-25 18:02:53.215640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.003 ms 00:21:34.878 [2024-10-25 18:02:53.215648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.878 [2024-10-25 18:02:53.216224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.878 [2024-10-25 18:02:53.216241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:34.878 [2024-10-25 18:02:53.216252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:21:34.878 [2024-10-25 18:02:53.216261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.878 [2024-10-25 18:02:53.296917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.878 [2024-10-25 18:02:53.296950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:34.878 [2024-10-25 18:02:53.296966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.623 ms 00:21:34.878 [2024-10-25 18:02:53.296974] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.138 [2024-10-25 18:02:53.321589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.138 [2024-10-25 18:02:53.321619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:35.138 [2024-10-25 18:02:53.321635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.558 ms 00:21:35.138 [2024-10-25 18:02:53.321643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.138 [2024-10-25 18:02:53.344942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.138 [2024-10-25 18:02:53.344971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:35.138 [2024-10-25 18:02:53.344983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.074 ms 00:21:35.138 [2024-10-25 18:02:53.344990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.138 [2024-10-25 18:02:53.367835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.138 [2024-10-25 18:02:53.367864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:35.138 [2024-10-25 18:02:53.367877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.820 ms 00:21:35.138 [2024-10-25 18:02:53.367884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.138 [2024-10-25 18:02:53.367909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.138 [2024-10-25 18:02:53.367918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:35.138 [2024-10-25 18:02:53.367930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:35.138 [2024-10-25 18:02:53.367938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.138 [2024-10-25 18:02:53.368014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.138 [2024-10-25 18:02:53.368025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:35.138 [2024-10-25 18:02:53.368035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:21:35.138 [2024-10-25 18:02:53.368043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.138 [2024-10-25 18:02:53.369048] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4293.610 ms, result 0 00:21:35.138 { 00:21:35.138 "name": "ftl0", 00:21:35.138 "uuid": "a4c6862a-3d0b-43e3-bcc4-02b5c61c45c7" 00:21:35.138 } 00:21:35.138 18:02:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:21:35.138 18:02:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:35.397 18:02:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:21:35.397 18:02:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:21:35.397 18:02:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:21:35.397 /dev/nbd0 00:21:35.397 18:02:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:21:35.397 18:02:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:21:35.397 18:02:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:21:35.397 18:02:53 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:35.397 18:02:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:35.397 18:02:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:21:35.656 18:02:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:21:35.656 18:02:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:35.656 18:02:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:35.656 18:02:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:21:35.656 1+0 records in 00:21:35.656 1+0 records out 00:21:35.656 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255574 s, 16.0 MB/s 00:21:35.656 18:02:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:21:35.656 18:02:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:21:35.656 18:02:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:21:35.656 18:02:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:35.656 18:02:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:21:35.656 18:02:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:21:35.656 [2024-10-25 18:02:53.904403] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:21:35.656 [2024-10-25 18:02:53.904522] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76202 ] 00:21:35.656 [2024-10-25 18:02:54.066806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.915 [2024-10-25 18:02:54.162977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:37.291  [2024-10-25T18:02:56.660Z] Copying: 192/1024 [MB] (192 MBps) [2024-10-25T18:02:57.595Z] Copying: 430/1024 [MB] (238 MBps) [2024-10-25T18:02:58.529Z] Copying: 689/1024 [MB] (259 MBps) [2024-10-25T18:02:58.787Z] Copying: 941/1024 [MB] (252 MBps) [2024-10-25T18:02:59.353Z] Copying: 1024/1024 [MB] (average 236 MBps) 00:21:40.918 00:21:40.918 18:02:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:43.489 18:03:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:21:43.489 [2024-10-25 18:03:01.509952] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
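waitfornbd above confirms the device node works by reading one 4096-byte block from /dev/nbd0 and checking that exactly 4096 bytes arrived; only then does the test build its payload: spdk_dd fills testfile with 262144 random 4 KiB blocks (262144 x 4096 B = 1073741824 B, the 1 GiB total shown in the progress lines) and md5sum records the reference checksum so the data can be compared after the dirty shutdown. A plain-coreutils equivalent of the fill step, assuming the same paths (the .md5 output file mirrors the restore test earlier and is illustrative here):

    testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile   # path taken from this log
    dd if=/dev/urandom of="$testfile" bs=4096 count=262144    # 1 GiB of random payload
    md5sum "$testfile" > "$testfile.md5"                      # reference checksum for later comparison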
00:21:43.489 [2024-10-25 18:03:01.510075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76288 ] 00:21:43.489 [2024-10-25 18:03:01.669168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.489 [2024-10-25 18:03:01.783331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.871  [2024-10-25T18:03:04.239Z] Copying: 21/1024 [MB] (21 MBps) [2024-10-25T18:03:05.176Z] Copying: 46/1024 [MB] (25 MBps) [2024-10-25T18:03:06.112Z] Copying: 67/1024 [MB] (21 MBps) [2024-10-25T18:03:07.050Z] Copying: 91/1024 [MB] (23 MBps) [2024-10-25T18:03:08.433Z] Copying: 115/1024 [MB] (24 MBps) [2024-10-25T18:03:09.376Z] Copying: 138/1024 [MB] (22 MBps) [2024-10-25T18:03:10.320Z] Copying: 161/1024 [MB] (23 MBps) [2024-10-25T18:03:11.256Z] Copying: 186/1024 [MB] (25 MBps) [2024-10-25T18:03:12.193Z] Copying: 211/1024 [MB] (24 MBps) [2024-10-25T18:03:13.129Z] Copying: 221364224/1073741824 [B] (0 Bps) [2024-10-25T18:03:14.063Z] Copying: 237/1024 [MB] (26 MBps) [2024-10-25T18:03:15.432Z] Copying: 266/1024 [MB] (28 MBps) [2024-10-25T18:03:16.362Z] Copying: 299/1024 [MB] (32 MBps) [2024-10-25T18:03:17.294Z] Copying: 328/1024 [MB] (29 MBps) [2024-10-25T18:03:18.227Z] Copying: 356/1024 [MB] (28 MBps) [2024-10-25T18:03:19.230Z] Copying: 385/1024 [MB] (28 MBps) [2024-10-25T18:03:20.166Z] Copying: 414/1024 [MB] (29 MBps) [2024-10-25T18:03:21.101Z] Copying: 442/1024 [MB] (28 MBps) [2024-10-25T18:03:22.038Z] Copying: 473/1024 [MB] (31 MBps) [2024-10-25T18:03:23.416Z] Copying: 503/1024 [MB] (29 MBps) [2024-10-25T18:03:24.352Z] Copying: 534/1024 [MB] (31 MBps) [2024-10-25T18:03:25.288Z] Copying: 564/1024 [MB] (30 MBps) [2024-10-25T18:03:26.223Z] Copying: 594/1024 [MB] (29 MBps) [2024-10-25T18:03:27.158Z] Copying: 627/1024 [MB] (33 MBps) [2024-10-25T18:03:28.093Z] Copying: 657/1024 [MB] (29 MBps) [2024-10-25T18:03:29.030Z] Copying: 686/1024 [MB] (28 MBps) [2024-10-25T18:03:30.428Z] Copying: 714/1024 [MB] (28 MBps) [2024-10-25T18:03:31.377Z] Copying: 743/1024 [MB] (28 MBps) [2024-10-25T18:03:32.311Z] Copying: 771/1024 [MB] (28 MBps) [2024-10-25T18:03:33.245Z] Copying: 804/1024 [MB] (33 MBps) [2024-10-25T18:03:34.177Z] Copying: 834/1024 [MB] (29 MBps) [2024-10-25T18:03:35.111Z] Copying: 865/1024 [MB] (31 MBps) [2024-10-25T18:03:36.045Z] Copying: 899/1024 [MB] (34 MBps) [2024-10-25T18:03:37.421Z] Copying: 934/1024 [MB] (34 MBps) [2024-10-25T18:03:38.355Z] Copying: 966/1024 [MB] (32 MBps) [2024-10-25T18:03:38.920Z] Copying: 997/1024 [MB] (31 MBps) [2024-10-25T18:03:39.487Z] Copying: 1024/1024 [MB] (average 27 MBps) 00:22:21.052 00:22:21.052 18:03:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:22:21.052 18:03:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:22:21.310 18:03:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:21.658 [2024-10-25 18:03:39.791677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.658 [2024-10-25 18:03:39.791748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:21.658 [2024-10-25 18:03:39.791762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:21.658 [2024-10-25 18:03:39.791770] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.658 [2024-10-25 18:03:39.791792] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:21.658 [2024-10-25 18:03:39.794091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.658 [2024-10-25 18:03:39.794126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:21.658 [2024-10-25 18:03:39.794136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.281 ms 00:22:21.658 [2024-10-25 18:03:39.794143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.658 [2024-10-25 18:03:39.795791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.658 [2024-10-25 18:03:39.795821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:21.658 [2024-10-25 18:03:39.795831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.616 ms 00:22:21.658 [2024-10-25 18:03:39.795838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.658 [2024-10-25 18:03:39.809480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.658 [2024-10-25 18:03:39.809515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:21.658 [2024-10-25 18:03:39.809527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.622 ms 00:22:21.658 [2024-10-25 18:03:39.809537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.658 [2024-10-25 18:03:39.814305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.658 [2024-10-25 18:03:39.814330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:21.658 [2024-10-25 18:03:39.814340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.722 ms 00:22:21.658 [2024-10-25 18:03:39.814348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.658 [2024-10-25 18:03:39.834184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.658 [2024-10-25 18:03:39.834230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:21.658 [2024-10-25 18:03:39.834244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.764 ms 00:22:21.658 [2024-10-25 18:03:39.834251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.658 [2024-10-25 18:03:39.847439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.658 [2024-10-25 18:03:39.847479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:21.658 [2024-10-25 18:03:39.847492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.143 ms 00:22:21.658 [2024-10-25 18:03:39.847499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.658 [2024-10-25 18:03:39.847652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.658 [2024-10-25 18:03:39.847662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:21.658 [2024-10-25 18:03:39.847671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:22:21.658 [2024-10-25 18:03:39.847680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.658 [2024-10-25 18:03:39.866354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.658 [2024-10-25 18:03:39.866621] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:21.658 [2024-10-25 18:03:39.866640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.656 ms 00:22:21.658 [2024-10-25 18:03:39.866646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.658 [2024-10-25 18:03:39.884436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.658 [2024-10-25 18:03:39.884477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:21.658 [2024-10-25 18:03:39.884490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.754 ms 00:22:21.658 [2024-10-25 18:03:39.884497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.658 [2024-10-25 18:03:39.901895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.658 [2024-10-25 18:03:39.902084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:21.658 [2024-10-25 18:03:39.902101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.356 ms 00:22:21.658 [2024-10-25 18:03:39.902108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.658 [2024-10-25 18:03:39.919405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.658 [2024-10-25 18:03:39.919437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:21.658 [2024-10-25 18:03:39.919447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.225 ms 00:22:21.658 [2024-10-25 18:03:39.919453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.658 [2024-10-25 18:03:39.919488] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:21.658 [2024-10-25 18:03:39.919501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:21.658 [2024-10-25 18:03:39.919512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:21.658 [2024-10-25 18:03:39.919519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:21.658 [2024-10-25 18:03:39.919527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:21.658 [2024-10-25 18:03:39.919534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:21.658 [2024-10-25 18:03:39.919542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:21.658 [2024-10-25 18:03:39.919549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:21.658 [2024-10-25 18:03:39.919566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:21.658 [2024-10-25 18:03:39.919572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:21.658 [2024-10-25 18:03:39.919579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:21.658 [2024-10-25 18:03:39.919586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:21.658 [2024-10-25 18:03:39.919594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:21.658 [2024-10-25 18:03:39.919599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:21.658 [2024-10-25 18:03:39.919606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:21.658 [2024-10-25 18:03:39.919612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:21.658 [2024-10-25 18:03:39.919622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:21.658 [2024-10-25 18:03:39.919628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:21.658 [2024-10-25 18:03:39.919635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:21.658 [2024-10-25 18:03:39.919640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:21.658 [2024-10-25 18:03:39.919648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:21.658 [2024-10-25 18:03:39.919654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:21.658 [2024-10-25 18:03:39.919663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:21.658 [2024-10-25 18:03:39.919669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:21.658 [2024-10-25 18:03:39.919678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919774] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 
18:03:39.919962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.919996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.920002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.920009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.920016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.920025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.920031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.920039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.920046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.920053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.920061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.920067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.920076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.920083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.920089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.920097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.920103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.920110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.920117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.920125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.920131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:22:21.659 [2024-10-25 18:03:39.920139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.920145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.920152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.920158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.920165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.920173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.920180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.920187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.920195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.920201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.920208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.920213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.920221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:21.659 [2024-10-25 18:03:39.920234] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:21.659 [2024-10-25 18:03:39.920242] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a4c6862a-3d0b-43e3-bcc4-02b5c61c45c7 00:22:21.659 [2024-10-25 18:03:39.920248] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:21.659 [2024-10-25 18:03:39.920258] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:21.659 [2024-10-25 18:03:39.920264] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:21.659 [2024-10-25 18:03:39.920271] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:21.659 [2024-10-25 18:03:39.920277] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:21.659 [2024-10-25 18:03:39.920285] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:21.659 [2024-10-25 18:03:39.920291] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:21.659 [2024-10-25 18:03:39.920297] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:21.659 [2024-10-25 18:03:39.920302] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:21.659 [2024-10-25 18:03:39.920310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.659 [2024-10-25 18:03:39.920316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:21.659 [2024-10-25 18:03:39.920324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.824 ms 00:22:21.659 [2024-10-25 18:03:39.920330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:21.659 [2024-10-25 18:03:39.930479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.659 [2024-10-25 18:03:39.930514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:21.659 [2024-10-25 18:03:39.930526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.119 ms 00:22:21.659 [2024-10-25 18:03:39.930536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.659 [2024-10-25 18:03:39.930859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.659 [2024-10-25 18:03:39.930868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:21.659 [2024-10-25 18:03:39.930877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:22:21.659 [2024-10-25 18:03:39.930883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.660 [2024-10-25 18:03:39.965223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.660 [2024-10-25 18:03:39.965451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:21.660 [2024-10-25 18:03:39.965471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.660 [2024-10-25 18:03:39.965482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.660 [2024-10-25 18:03:39.965583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.660 [2024-10-25 18:03:39.965591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:21.660 [2024-10-25 18:03:39.965600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.660 [2024-10-25 18:03:39.965606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.660 [2024-10-25 18:03:39.965751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.660 [2024-10-25 18:03:39.965760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:21.660 [2024-10-25 18:03:39.965769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.660 [2024-10-25 18:03:39.965775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.660 [2024-10-25 18:03:39.965796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.660 [2024-10-25 18:03:39.965804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:21.660 [2024-10-25 18:03:39.965812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.660 [2024-10-25 18:03:39.965818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.660 [2024-10-25 18:03:40.029074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.660 [2024-10-25 18:03:40.029138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:21.660 [2024-10-25 18:03:40.029152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.660 [2024-10-25 18:03:40.029162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.918 [2024-10-25 18:03:40.081424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.918 [2024-10-25 18:03:40.081482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:21.918 [2024-10-25 18:03:40.081496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.918 
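A note on the statistics dump a few lines up: WAF is conventionally total media writes divided by host (user) writes, and with total writes = 960 and user writes = 0 recorded at unload time the ratio degenerates to the 'inf' printed there. The same arithmetic, with the two values copied from the dump (the formula is the conventional definition, not taken from the FTL source):

    total_writes=960
    user_writes=0
    awk -v t="$total_writes" -v u="$user_writes" \
        'BEGIN { if (u > 0) print t / u; else print "inf" }'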
[2024-10-25 18:03:40.081504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.918 [2024-10-25 18:03:40.081612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.918 [2024-10-25 18:03:40.081621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:21.918 [2024-10-25 18:03:40.081629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.918 [2024-10-25 18:03:40.081636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.918 [2024-10-25 18:03:40.081704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.918 [2024-10-25 18:03:40.081713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:21.918 [2024-10-25 18:03:40.081722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.918 [2024-10-25 18:03:40.081729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.918 [2024-10-25 18:03:40.081813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.918 [2024-10-25 18:03:40.081822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:21.918 [2024-10-25 18:03:40.081829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.918 [2024-10-25 18:03:40.081835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.918 [2024-10-25 18:03:40.081865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.918 [2024-10-25 18:03:40.081875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:21.918 [2024-10-25 18:03:40.081883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.918 [2024-10-25 18:03:40.081890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.918 [2024-10-25 18:03:40.081928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.918 [2024-10-25 18:03:40.081936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:21.918 [2024-10-25 18:03:40.081944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.918 [2024-10-25 18:03:40.081950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.918 [2024-10-25 18:03:40.081997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.918 [2024-10-25 18:03:40.082005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:21.918 [2024-10-25 18:03:40.082013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.919 [2024-10-25 18:03:40.082019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.919 [2024-10-25 18:03:40.082138] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 290.431 ms, result 0 00:22:21.919 true 00:22:21.919 18:03:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 76054 00:22:21.919 18:03:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid76054 00:22:21.919 18:03:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:22:21.919 [2024-10-25 18:03:40.179960] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
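This is the step that gives ftl_dirty_shutdown its name: rather than being stopped over RPC, the target process (pid 76054) is killed outright and its trace file under /dev/shm is removed, while a second 1 GiB payload is prepared for the post-kill write; bash reports the SIGKILL a few lines below ('line 87: 76054 Killed') once the next command reaps it. As traced at lines 83-87 of the script:

    kill -9 76054                             # SIGKILL the spdk_tgt that owns ftl0
    rm -f /dev/shm/spdk_tgt_trace.pid76054    # drop its shared-memory trace file
    spdk_dd --if=/dev/urandom --of=testfile2 --bs=4096 --count=262144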
00:22:21.919 [2024-10-25 18:03:40.180081] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76694 ] 00:22:21.919 [2024-10-25 18:03:40.337028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.177 [2024-10-25 18:03:40.438822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.552  [2024-10-25T18:03:42.922Z] Copying: 255/1024 [MB] (255 MBps) [2024-10-25T18:03:43.857Z] Copying: 512/1024 [MB] (257 MBps) [2024-10-25T18:03:44.791Z] Copying: 766/1024 [MB] (254 MBps) [2024-10-25T18:03:44.791Z] Copying: 1020/1024 [MB] (254 MBps) [2024-10-25T18:03:45.357Z] Copying: 1024/1024 [MB] (average 255 MBps) 00:22:26.922 00:22:26.922 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 76054 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:22:26.922 18:03:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:26.922 [2024-10-25 18:03:45.316829] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:22:26.922 [2024-10-25 18:03:45.316948] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76752 ] 00:22:27.181 [2024-10-25 18:03:45.483663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.181 [2024-10-25 18:03:45.582205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.439 [2024-10-25 18:03:45.814199] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:27.439 [2024-10-25 18:03:45.814266] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:27.698 [2024-10-25 18:03:45.877521] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:22:27.698 [2024-10-25 18:03:45.877841] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:22:27.698 [2024-10-25 18:03:45.878062] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:22:27.698 [2024-10-25 18:03:46.051712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.698 [2024-10-25 18:03:46.051768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:27.698 [2024-10-25 18:03:46.051780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:27.698 [2024-10-25 18:03:46.051786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.698 [2024-10-25 18:03:46.051829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.698 [2024-10-25 18:03:46.051838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:27.698 [2024-10-25 18:03:46.051844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:22:27.698 [2024-10-25 18:03:46.051850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.698 [2024-10-25 18:03:46.051865] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:27.698 
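With the target dead, the follow-up write cannot go through NBD; instead spdk_dd brings the bdev stack up itself from the JSON captured earlier and writes to the FTL bdev directly via --ob. Taking the 4 KiB block size that the copy totals imply, --count=262144 with --seek=262144 writes a second 1 GiB starting at the 1 GiB mark, right behind the region filled through /dev/nbd0. The 'unable to find bdev: nvc0n1' notices above are apparently emitted while the open waits for the cache bdev to register, and the blobstore recovery messages confirm the dirty-start path was taken. The invocation, with paths shortened:

    spdk_dd --if=testfile2 --ob=ftl0 --count=262144 --seek=262144 \
        --json=ftl.json
    # offset arithmetic implied by the copy totals above:
    #   262144 * 4096 B = 1073741824 B = 1 GiB written, starting 1 GiB into the bdev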
[2024-10-25 18:03:46.052374] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:27.698 [2024-10-25 18:03:46.052388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.698 [2024-10-25 18:03:46.052394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:27.698 [2024-10-25 18:03:46.052401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:22:27.698 [2024-10-25 18:03:46.052408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.698 [2024-10-25 18:03:46.053709] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:27.698 [2024-10-25 18:03:46.063837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.698 [2024-10-25 18:03:46.063868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:27.698 [2024-10-25 18:03:46.063878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.129 ms 00:22:27.698 [2024-10-25 18:03:46.063885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.698 [2024-10-25 18:03:46.063936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.698 [2024-10-25 18:03:46.063944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:27.698 [2024-10-25 18:03:46.063951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:22:27.698 [2024-10-25 18:03:46.063957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.698 [2024-10-25 18:03:46.070293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.698 [2024-10-25 18:03:46.070322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:27.698 [2024-10-25 18:03:46.070332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.295 ms 00:22:27.698 [2024-10-25 18:03:46.070338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.698 [2024-10-25 18:03:46.070400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.698 [2024-10-25 18:03:46.070408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:27.698 [2024-10-25 18:03:46.070415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:22:27.698 [2024-10-25 18:03:46.070423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.698 [2024-10-25 18:03:46.070470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.698 [2024-10-25 18:03:46.070481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:27.698 [2024-10-25 18:03:46.070488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:27.698 [2024-10-25 18:03:46.070495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.698 [2024-10-25 18:03:46.070514] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:27.698 [2024-10-25 18:03:46.073578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.698 [2024-10-25 18:03:46.073603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:27.698 [2024-10-25 18:03:46.073611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.069 ms 00:22:27.698 [2024-10-25 18:03:46.073617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:22:27.698 [2024-10-25 18:03:46.073642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.698 [2024-10-25 18:03:46.073650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:27.698 [2024-10-25 18:03:46.073656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:27.698 [2024-10-25 18:03:46.073664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.698 [2024-10-25 18:03:46.073690] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:27.698 [2024-10-25 18:03:46.073711] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:27.698 [2024-10-25 18:03:46.073741] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:27.698 [2024-10-25 18:03:46.073755] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:27.698 [2024-10-25 18:03:46.073840] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:27.698 [2024-10-25 18:03:46.073849] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:27.699 [2024-10-25 18:03:46.073858] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:27.699 [2024-10-25 18:03:46.073866] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:27.699 [2024-10-25 18:03:46.073876] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:27.699 [2024-10-25 18:03:46.073882] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:27.699 [2024-10-25 18:03:46.073889] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:27.699 [2024-10-25 18:03:46.073895] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:27.699 [2024-10-25 18:03:46.073901] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:27.699 [2024-10-25 18:03:46.073907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.699 [2024-10-25 18:03:46.073913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:27.699 [2024-10-25 18:03:46.073920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:22:27.699 [2024-10-25 18:03:46.073925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.699 [2024-10-25 18:03:46.073990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.699 [2024-10-25 18:03:46.073997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:27.699 [2024-10-25 18:03:46.074005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:27.699 [2024-10-25 18:03:46.074011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.699 [2024-10-25 18:03:46.074091] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:27.699 [2024-10-25 18:03:46.074100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:27.699 [2024-10-25 18:03:46.074107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:27.699 [2024-10-25 18:03:46.074114] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:27.699 [2024-10-25 18:03:46.074120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:27.699 [2024-10-25 18:03:46.074126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:27.699 [2024-10-25 18:03:46.074131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:27.699 [2024-10-25 18:03:46.074138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:27.699 [2024-10-25 18:03:46.074144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:27.699 [2024-10-25 18:03:46.074150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:27.699 [2024-10-25 18:03:46.074156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:27.699 [2024-10-25 18:03:46.074167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:27.699 [2024-10-25 18:03:46.074172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:27.699 [2024-10-25 18:03:46.074177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:27.699 [2024-10-25 18:03:46.074182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:27.699 [2024-10-25 18:03:46.074189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:27.699 [2024-10-25 18:03:46.074195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:27.699 [2024-10-25 18:03:46.074200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:27.699 [2024-10-25 18:03:46.074205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:27.699 [2024-10-25 18:03:46.074210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:27.699 [2024-10-25 18:03:46.074215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:27.699 [2024-10-25 18:03:46.074220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:27.699 [2024-10-25 18:03:46.074225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:27.699 [2024-10-25 18:03:46.074230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:27.699 [2024-10-25 18:03:46.074235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:27.699 [2024-10-25 18:03:46.074240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:27.699 [2024-10-25 18:03:46.074245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:27.699 [2024-10-25 18:03:46.074250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:27.699 [2024-10-25 18:03:46.074255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:27.699 [2024-10-25 18:03:46.074260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:27.699 [2024-10-25 18:03:46.074265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:27.699 [2024-10-25 18:03:46.074270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:27.699 [2024-10-25 18:03:46.074275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:27.699 [2024-10-25 18:03:46.074279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:27.699 [2024-10-25 18:03:46.074284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:27.699 
[2024-10-25 18:03:46.074289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:27.699 [2024-10-25 18:03:46.074294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:27.699 [2024-10-25 18:03:46.074299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:27.699 [2024-10-25 18:03:46.074304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:27.699 [2024-10-25 18:03:46.074309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:27.699 [2024-10-25 18:03:46.074315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:27.699 [2024-10-25 18:03:46.074320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:27.699 [2024-10-25 18:03:46.074326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:27.699 [2024-10-25 18:03:46.074331] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:27.699 [2024-10-25 18:03:46.074337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:27.699 [2024-10-25 18:03:46.074342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:27.699 [2024-10-25 18:03:46.074350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:27.699 [2024-10-25 18:03:46.074357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:27.699 [2024-10-25 18:03:46.074363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:27.699 [2024-10-25 18:03:46.074368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:27.699 [2024-10-25 18:03:46.074374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:27.699 [2024-10-25 18:03:46.074379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:27.699 [2024-10-25 18:03:46.074384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:27.699 [2024-10-25 18:03:46.074391] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:27.699 [2024-10-25 18:03:46.074398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:27.699 [2024-10-25 18:03:46.074405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:27.699 [2024-10-25 18:03:46.074411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:27.699 [2024-10-25 18:03:46.074417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:27.699 [2024-10-25 18:03:46.074422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:27.699 [2024-10-25 18:03:46.074428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:27.699 [2024-10-25 18:03:46.074433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:27.699 [2024-10-25 18:03:46.074439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:22:27.699 [2024-10-25 18:03:46.074444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:27.699 [2024-10-25 18:03:46.074449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:27.699 [2024-10-25 18:03:46.074456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:27.699 [2024-10-25 18:03:46.074461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:27.699 [2024-10-25 18:03:46.074467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:27.699 [2024-10-25 18:03:46.074472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:27.699 [2024-10-25 18:03:46.074478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:27.699 [2024-10-25 18:03:46.074483] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:27.699 [2024-10-25 18:03:46.074489] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:27.699 [2024-10-25 18:03:46.074495] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:27.699 [2024-10-25 18:03:46.074502] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:27.699 [2024-10-25 18:03:46.074507] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:27.699 [2024-10-25 18:03:46.074513] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:27.699 [2024-10-25 18:03:46.074518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.699 [2024-10-25 18:03:46.074524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:27.699 [2024-10-25 18:03:46.074530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.482 ms 00:22:27.699 [2024-10-25 18:03:46.074535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.699 [2024-10-25 18:03:46.098992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.699 [2024-10-25 18:03:46.099032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:27.699 [2024-10-25 18:03:46.099041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.391 ms 00:22:27.699 [2024-10-25 18:03:46.099048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.699 [2024-10-25 18:03:46.099121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.699 [2024-10-25 18:03:46.099131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:27.699 [2024-10-25 18:03:46.099137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:27.699 [2024-10-25 
18:03:46.099144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.959 [2024-10-25 18:03:46.141290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.959 [2024-10-25 18:03:46.141517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:27.959 [2024-10-25 18:03:46.141533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.091 ms 00:22:27.959 [2024-10-25 18:03:46.141544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.959 [2024-10-25 18:03:46.141608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.959 [2024-10-25 18:03:46.141617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:27.959 [2024-10-25 18:03:46.141625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:27.959 [2024-10-25 18:03:46.141631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.959 [2024-10-25 18:03:46.142086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.959 [2024-10-25 18:03:46.142105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:27.959 [2024-10-25 18:03:46.142112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:22:27.959 [2024-10-25 18:03:46.142119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.959 [2024-10-25 18:03:46.142238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.959 [2024-10-25 18:03:46.142247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:27.959 [2024-10-25 18:03:46.142254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:22:27.959 [2024-10-25 18:03:46.142260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.959 [2024-10-25 18:03:46.154228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.959 [2024-10-25 18:03:46.154259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:27.959 [2024-10-25 18:03:46.154268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.950 ms 00:22:27.959 [2024-10-25 18:03:46.154275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.959 [2024-10-25 18:03:46.164755] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:27.959 [2024-10-25 18:03:46.164788] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:27.959 [2024-10-25 18:03:46.164798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.959 [2024-10-25 18:03:46.164806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:27.959 [2024-10-25 18:03:46.164813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.415 ms 00:22:27.959 [2024-10-25 18:03:46.164819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.959 [2024-10-25 18:03:46.183970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.959 [2024-10-25 18:03:46.184142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:27.959 [2024-10-25 18:03:46.184166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.113 ms 00:22:27.959 [2024-10-25 18:03:46.184173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:22:27.959 [2024-10-25 18:03:46.193374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.959 [2024-10-25 18:03:46.193406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:27.959 [2024-10-25 18:03:46.193414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.162 ms 00:22:27.959 [2024-10-25 18:03:46.193420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.959 [2024-10-25 18:03:46.202185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.959 [2024-10-25 18:03:46.202214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:27.959 [2024-10-25 18:03:46.202222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.736 ms 00:22:27.959 [2024-10-25 18:03:46.202228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.959 [2024-10-25 18:03:46.202760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.959 [2024-10-25 18:03:46.202776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:27.959 [2024-10-25 18:03:46.202784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:22:27.959 [2024-10-25 18:03:46.202790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.959 [2024-10-25 18:03:46.251459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.959 [2024-10-25 18:03:46.251726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:27.959 [2024-10-25 18:03:46.251746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.653 ms 00:22:27.959 [2024-10-25 18:03:46.251753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.960 [2024-10-25 18:03:46.260153] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:27.960 [2024-10-25 18:03:46.262873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.960 [2024-10-25 18:03:46.263023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:27.960 [2024-10-25 18:03:46.263038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.068 ms 00:22:27.960 [2024-10-25 18:03:46.263045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.960 [2024-10-25 18:03:46.263135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.960 [2024-10-25 18:03:46.263147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:27.960 [2024-10-25 18:03:46.263155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:27.960 [2024-10-25 18:03:46.263161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.960 [2024-10-25 18:03:46.263240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.960 [2024-10-25 18:03:46.263249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:27.960 [2024-10-25 18:03:46.263256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:27.960 [2024-10-25 18:03:46.263262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.960 [2024-10-25 18:03:46.263280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.960 [2024-10-25 18:03:46.263288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:27.960 
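The Restore actions above (NV cache metadata, valid map, band info, trim metadata, P2L checkpoints, L2P) are the dirty-start counterpart of the Persist actions from the clean unload earlier; they rebuild the mappings that the SIGKILL prevented from being flushed. The saved JSON is not tied to spdk_dd, either: a sketch of replaying it into a fresh long-running target, assuming rpc.py's load_config command, which reads such a config dump on stdin:

    build/bin/spdk_tgt -m 0x1 &
    scripts/rpc.py load_config < ftl.json   # replays the saved bdev subsystem config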
[2024-10-25 18:03:46.263297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:27.960 [2024-10-25 18:03:46.263304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.960 [2024-10-25 18:03:46.263333] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:27.960 [2024-10-25 18:03:46.263342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.960 [2024-10-25 18:03:46.263350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:27.960 [2024-10-25 18:03:46.263356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:27.960 [2024-10-25 18:03:46.263363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.960 [2024-10-25 18:03:46.281725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.960 [2024-10-25 18:03:46.281881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:27.960 [2024-10-25 18:03:46.281896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.344 ms 00:22:27.960 [2024-10-25 18:03:46.281903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.960 [2024-10-25 18:03:46.281971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.960 [2024-10-25 18:03:46.281979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:27.960 [2024-10-25 18:03:46.281987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:22:27.960 [2024-10-25 18:03:46.281993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.960 [2024-10-25 18:03:46.282936] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 230.806 ms, result 0 00:22:28.894  [2024-10-25T18:03:48.702Z] Copying: 43/1024 [MB] (43 MBps) [2024-10-25T18:03:49.659Z] Copying: 93/1024 [MB] (49 MBps) [2024-10-25T18:03:50.597Z] Copying: 137/1024 [MB] (43 MBps) [2024-10-25T18:03:51.533Z] Copying: 180/1024 [MB] (43 MBps) [2024-10-25T18:03:52.466Z] Copying: 224/1024 [MB] (43 MBps) [2024-10-25T18:03:53.400Z] Copying: 267/1024 [MB] (43 MBps) [2024-10-25T18:03:54.332Z] Copying: 310/1024 [MB] (43 MBps) [2024-10-25T18:03:55.708Z] Copying: 353/1024 [MB] (42 MBps) [2024-10-25T18:03:56.642Z] Copying: 398/1024 [MB] (45 MBps) [2024-10-25T18:03:57.577Z] Copying: 443/1024 [MB] (45 MBps) [2024-10-25T18:03:58.511Z] Copying: 487/1024 [MB] (43 MBps) [2024-10-25T18:03:59.444Z] Copying: 531/1024 [MB] (43 MBps) [2024-10-25T18:04:00.378Z] Copying: 567/1024 [MB] (36 MBps) [2024-10-25T18:04:01.311Z] Copying: 604/1024 [MB] (36 MBps) [2024-10-25T18:04:02.685Z] Copying: 648/1024 [MB] (44 MBps) [2024-10-25T18:04:03.619Z] Copying: 693/1024 [MB] (44 MBps) [2024-10-25T18:04:04.596Z] Copying: 736/1024 [MB] (43 MBps) [2024-10-25T18:04:05.530Z] Copying: 781/1024 [MB] (44 MBps) [2024-10-25T18:04:06.465Z] Copying: 825/1024 [MB] (44 MBps) [2024-10-25T18:04:07.399Z] Copying: 869/1024 [MB] (43 MBps) [2024-10-25T18:04:08.332Z] Copying: 912/1024 [MB] (43 MBps) [2024-10-25T18:04:09.706Z] Copying: 957/1024 [MB] (44 MBps) [2024-10-25T18:04:10.641Z] Copying: 999/1024 [MB] (42 MBps) [2024-10-25T18:04:10.906Z] Copying: 1023/1024 [MB] (23 MBps) [2024-10-25T18:04:10.906Z] Copying: 1024/1024 [MB] (average 41 MBps)[2024-10-25 18:04:10.821026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.471 [2024-10-25 18:04:10.821166] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:52.471 [2024-10-25 18:04:10.821316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:52.471 [2024-10-25 18:04:10.821327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.471 [2024-10-25 18:04:10.823898] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:52.471 [2024-10-25 18:04:10.826692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.471 [2024-10-25 18:04:10.826802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:52.471 [2024-10-25 18:04:10.826849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.700 ms 00:22:52.471 [2024-10-25 18:04:10.826867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.471 [2024-10-25 18:04:10.835595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.472 [2024-10-25 18:04:10.835691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:52.472 [2024-10-25 18:04:10.835739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.355 ms 00:22:52.472 [2024-10-25 18:04:10.835757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.472 [2024-10-25 18:04:10.851431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.472 [2024-10-25 18:04:10.851524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:52.472 [2024-10-25 18:04:10.851585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.651 ms 00:22:52.472 [2024-10-25 18:04:10.851605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.472 [2024-10-25 18:04:10.856388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.472 [2024-10-25 18:04:10.856475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:52.472 [2024-10-25 18:04:10.856526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.745 ms 00:22:52.472 [2024-10-25 18:04:10.856544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.472 [2024-10-25 18:04:10.875279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.472 [2024-10-25 18:04:10.875400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:52.472 [2024-10-25 18:04:10.875444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.682 ms 00:22:52.472 [2024-10-25 18:04:10.875462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.472 [2024-10-25 18:04:10.886788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.472 [2024-10-25 18:04:10.886890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:52.472 [2024-10-25 18:04:10.886932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.292 ms 00:22:52.472 [2024-10-25 18:04:10.886949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.742 [2024-10-25 18:04:10.942460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.742 [2024-10-25 18:04:10.942668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:52.742 [2024-10-25 18:04:10.942714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.469 ms 00:22:52.742 [2024-10-25 18:04:10.942743] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.742 [2024-10-25 18:04:10.961634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.742 [2024-10-25 18:04:10.961776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:52.742 [2024-10-25 18:04:10.961817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.867 ms 00:22:52.742 [2024-10-25 18:04:10.961834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.742 [2024-10-25 18:04:10.979293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.742 [2024-10-25 18:04:10.979410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:52.742 [2024-10-25 18:04:10.979423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.426 ms 00:22:52.742 [2024-10-25 18:04:10.979430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.742 [2024-10-25 18:04:10.996736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.742 [2024-10-25 18:04:10.996855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:52.742 [2024-10-25 18:04:10.996869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.280 ms 00:22:52.742 [2024-10-25 18:04:10.996875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.742 [2024-10-25 18:04:11.014053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.742 [2024-10-25 18:04:11.014081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:52.742 [2024-10-25 18:04:11.014090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.128 ms 00:22:52.742 [2024-10-25 18:04:11.014097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.742 [2024-10-25 18:04:11.014123] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:52.742 [2024-10-25 18:04:11.014137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 126464 / 261120 wr_cnt: 1 state: open 00:22:52.742 [2024-10-25 18:04:11.014148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:52.742 [2024-10-25 18:04:11.014155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:52.742 [2024-10-25 18:04:11.014161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:52.742 [2024-10-25 18:04:11.014168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:52.742 [2024-10-25 18:04:11.014174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:52.742 [2024-10-25 18:04:11.014181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:52.742 [2024-10-25 18:04:11.014187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:52.742 [2024-10-25 18:04:11.014193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:52.742 [2024-10-25 18:04:11.014200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:52.742 [2024-10-25 18:04:11.014206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 
00:22:52.742 [2024-10-25 18:04:11.014212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:52.742 [2024-10-25 18:04:11.014219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:52.742 [2024-10-25 18:04:11.014225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:52.742 [2024-10-25 18:04:11.014231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:52.742 [2024-10-25 18:04:11.014236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:52.742 [2024-10-25 18:04:11.014242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:52.742 [2024-10-25 18:04:11.014248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:52.742 [2024-10-25 18:04:11.014254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:52.742 [2024-10-25 18:04:11.014260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:52.742 [2024-10-25 18:04:11.014266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:52.742 [2024-10-25 18:04:11.014272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:52.742 [2024-10-25 18:04:11.014278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:52.742 [2024-10-25 18:04:11.014284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:52.742 [2024-10-25 18:04:11.014290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:52.742 [2024-10-25 18:04:11.014296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:52.742 [2024-10-25 18:04:11.014510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:52.742 [2024-10-25 18:04:11.014517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:52.742 [2024-10-25 18:04:11.014524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 
wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014901] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:52.743 [2024-10-25 18:04:11.014999] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:52.743 [2024-10-25 18:04:11.015006] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a4c6862a-3d0b-43e3-bcc4-02b5c61c45c7 00:22:52.743 [2024-10-25 18:04:11.015012] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 126464 00:22:52.743 [2024-10-25 18:04:11.015018] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 127424 00:22:52.743 [2024-10-25 18:04:11.015033] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 126464 00:22:52.743 [2024-10-25 18:04:11.015040] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0076 00:22:52.743 [2024-10-25 18:04:11.015046] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:52.743 [2024-10-25 18:04:11.015052] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:52.743 [2024-10-25 18:04:11.015059] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:52.743 [2024-10-25 18:04:11.015064] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:52.743 [2024-10-25 18:04:11.015069] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:52.743 [2024-10-25 18:04:11.015075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.743 [2024-10-25 18:04:11.015081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Dump statistics 00:22:52.743 [2024-10-25 18:04:11.015088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.953 ms 00:22:52.743 [2024-10-25 18:04:11.015096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.743 [2024-10-25 18:04:11.025188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.743 [2024-10-25 18:04:11.025217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:52.743 [2024-10-25 18:04:11.025226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.078 ms 00:22:52.743 [2024-10-25 18:04:11.025233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.743 [2024-10-25 18:04:11.025525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.743 [2024-10-25 18:04:11.025539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:52.743 [2024-10-25 18:04:11.025545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:22:52.743 [2024-10-25 18:04:11.025552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.743 [2024-10-25 18:04:11.052194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.743 [2024-10-25 18:04:11.052356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:52.743 [2024-10-25 18:04:11.052371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.744 [2024-10-25 18:04:11.052378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.744 [2024-10-25 18:04:11.052443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.744 [2024-10-25 18:04:11.052451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:52.744 [2024-10-25 18:04:11.052457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.744 [2024-10-25 18:04:11.052463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.744 [2024-10-25 18:04:11.052523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.744 [2024-10-25 18:04:11.052532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:52.744 [2024-10-25 18:04:11.052540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.744 [2024-10-25 18:04:11.052546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.744 [2024-10-25 18:04:11.052578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.744 [2024-10-25 18:04:11.052585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:52.744 [2024-10-25 18:04:11.052592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.744 [2024-10-25 18:04:11.052598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.744 [2024-10-25 18:04:11.116118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.744 [2024-10-25 18:04:11.116179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:52.744 [2024-10-25 18:04:11.116190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.744 [2024-10-25 18:04:11.116197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.744 [2024-10-25 18:04:11.167641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.744 [2024-10-25 
18:04:11.167697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:52.744 [2024-10-25 18:04:11.167708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.744 [2024-10-25 18:04:11.167716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.744 [2024-10-25 18:04:11.167800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.744 [2024-10-25 18:04:11.167810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:52.744 [2024-10-25 18:04:11.167817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.744 [2024-10-25 18:04:11.167823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.744 [2024-10-25 18:04:11.167852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.744 [2024-10-25 18:04:11.167859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:52.744 [2024-10-25 18:04:11.167865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.744 [2024-10-25 18:04:11.167872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.744 [2024-10-25 18:04:11.167951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.744 [2024-10-25 18:04:11.167961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:52.744 [2024-10-25 18:04:11.167968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.744 [2024-10-25 18:04:11.167974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.744 [2024-10-25 18:04:11.167998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.744 [2024-10-25 18:04:11.168005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:52.744 [2024-10-25 18:04:11.168012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.744 [2024-10-25 18:04:11.168019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.744 [2024-10-25 18:04:11.168049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.744 [2024-10-25 18:04:11.168057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:52.744 [2024-10-25 18:04:11.168066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.744 [2024-10-25 18:04:11.168072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.744 [2024-10-25 18:04:11.168109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.744 [2024-10-25 18:04:11.168118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:52.744 [2024-10-25 18:04:11.168124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.744 [2024-10-25 18:04:11.168130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.744 [2024-10-25 18:04:11.168235] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 349.086 ms, result 0 00:22:55.272 00:22:55.272 00:22:55.272 18:04:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:22:57.801 18:04:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:57.801 [2024-10-25 18:04:15.866070] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:22:57.801 [2024-10-25 18:04:15.866211] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77049 ] 00:22:57.801 [2024-10-25 18:04:16.023821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.801 [2024-10-25 18:04:16.124280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.059 [2024-10-25 18:04:16.355078] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:58.059 [2024-10-25 18:04:16.355359] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:58.327 [2024-10-25 18:04:16.508223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.327 [2024-10-25 18:04:16.508268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:58.327 [2024-10-25 18:04:16.508282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:58.327 [2024-10-25 18:04:16.508289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.327 [2024-10-25 18:04:16.508328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.327 [2024-10-25 18:04:16.508336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:58.327 [2024-10-25 18:04:16.508345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:22:58.327 [2024-10-25 18:04:16.508351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.327 [2024-10-25 18:04:16.508364] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:58.327 [2024-10-25 18:04:16.508886] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:58.327 [2024-10-25 18:04:16.508901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.327 [2024-10-25 18:04:16.508908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:58.327 [2024-10-25 18:04:16.508915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:22:58.327 [2024-10-25 18:04:16.508921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.328 [2024-10-25 18:04:16.510173] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:58.328 [2024-10-25 18:04:16.520616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.328 [2024-10-25 18:04:16.520638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:58.328 [2024-10-25 18:04:16.520649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.444 ms 00:22:58.328 [2024-10-25 18:04:16.520656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.328 [2024-10-25 18:04:16.520704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.328 [2024-10-25 18:04:16.520714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:58.328 [2024-10-25 18:04:16.520721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.020 ms 00:22:58.328 [2024-10-25 18:04:16.520728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.328 [2024-10-25 18:04:16.527019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.328 [2024-10-25 18:04:16.527041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:58.328 [2024-10-25 18:04:16.527049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.242 ms 00:22:58.328 [2024-10-25 18:04:16.527055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.328 [2024-10-25 18:04:16.527120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.328 [2024-10-25 18:04:16.527128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:58.328 [2024-10-25 18:04:16.527135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:22:58.328 [2024-10-25 18:04:16.527141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.328 [2024-10-25 18:04:16.527180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.328 [2024-10-25 18:04:16.527190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:58.328 [2024-10-25 18:04:16.527197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:58.328 [2024-10-25 18:04:16.527203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.328 [2024-10-25 18:04:16.527220] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:58.328 [2024-10-25 18:04:16.530453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.328 [2024-10-25 18:04:16.530473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:58.328 [2024-10-25 18:04:16.530481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.239 ms 00:22:58.328 [2024-10-25 18:04:16.530489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.328 [2024-10-25 18:04:16.530522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.328 [2024-10-25 18:04:16.530529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:58.328 [2024-10-25 18:04:16.530536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:58.328 [2024-10-25 18:04:16.530542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.328 [2024-10-25 18:04:16.530575] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:58.328 [2024-10-25 18:04:16.530593] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:58.328 [2024-10-25 18:04:16.530623] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:58.328 [2024-10-25 18:04:16.530639] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:58.328 [2024-10-25 18:04:16.530723] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:58.328 [2024-10-25 18:04:16.530732] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:58.328 [2024-10-25 18:04:16.530740] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl0] layout blob store 0x190 bytes 00:22:58.328 [2024-10-25 18:04:16.530748] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:58.328 [2024-10-25 18:04:16.530755] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:58.328 [2024-10-25 18:04:16.530762] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:58.328 [2024-10-25 18:04:16.530767] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:58.328 [2024-10-25 18:04:16.530773] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:58.328 [2024-10-25 18:04:16.530779] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:58.328 [2024-10-25 18:04:16.530788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.328 [2024-10-25 18:04:16.530794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:58.328 [2024-10-25 18:04:16.530801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:22:58.328 [2024-10-25 18:04:16.530806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.328 [2024-10-25 18:04:16.530869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.328 [2024-10-25 18:04:16.530875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:58.328 [2024-10-25 18:04:16.530882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:58.328 [2024-10-25 18:04:16.530889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.328 [2024-10-25 18:04:16.530974] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:58.328 [2024-10-25 18:04:16.530985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:58.328 [2024-10-25 18:04:16.530992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:58.328 [2024-10-25 18:04:16.530999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:58.328 [2024-10-25 18:04:16.531005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:58.328 [2024-10-25 18:04:16.531010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:58.328 [2024-10-25 18:04:16.531016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:58.328 [2024-10-25 18:04:16.531022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:58.328 [2024-10-25 18:04:16.531027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:58.328 [2024-10-25 18:04:16.531033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:58.328 [2024-10-25 18:04:16.531038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:58.328 [2024-10-25 18:04:16.531043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:58.328 [2024-10-25 18:04:16.531048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:58.328 [2024-10-25 18:04:16.531054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:58.328 [2024-10-25 18:04:16.531059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:58.328 [2024-10-25 18:04:16.531069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:58.328 [2024-10-25 18:04:16.531074] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:58.328 [2024-10-25 18:04:16.531078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:58.328 [2024-10-25 18:04:16.531084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:58.328 [2024-10-25 18:04:16.531089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:58.328 [2024-10-25 18:04:16.531094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:58.328 [2024-10-25 18:04:16.531100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:58.328 [2024-10-25 18:04:16.531105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:58.328 [2024-10-25 18:04:16.531111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:58.328 [2024-10-25 18:04:16.531116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:58.328 [2024-10-25 18:04:16.531121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:58.328 [2024-10-25 18:04:16.531126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:58.328 [2024-10-25 18:04:16.531131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:58.328 [2024-10-25 18:04:16.531136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:58.328 [2024-10-25 18:04:16.531141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:58.328 [2024-10-25 18:04:16.531146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:58.328 [2024-10-25 18:04:16.531152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:58.328 [2024-10-25 18:04:16.531157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:58.328 [2024-10-25 18:04:16.531162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:58.328 [2024-10-25 18:04:16.531168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:58.329 [2024-10-25 18:04:16.531173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:58.329 [2024-10-25 18:04:16.531178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:58.329 [2024-10-25 18:04:16.531182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:58.329 [2024-10-25 18:04:16.531188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:58.329 [2024-10-25 18:04:16.531193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:58.329 [2024-10-25 18:04:16.531198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:58.329 [2024-10-25 18:04:16.531203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:58.329 [2024-10-25 18:04:16.531208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:58.329 [2024-10-25 18:04:16.531214] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:58.329 [2024-10-25 18:04:16.531220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:58.329 [2024-10-25 18:04:16.531225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:58.329 [2024-10-25 18:04:16.531232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:58.329 [2024-10-25 18:04:16.531238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:58.329 
[2024-10-25 18:04:16.531243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:58.329 [2024-10-25 18:04:16.531249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:58.329 [2024-10-25 18:04:16.531254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:58.329 [2024-10-25 18:04:16.531259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:58.329 [2024-10-25 18:04:16.531264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:58.329 [2024-10-25 18:04:16.531271] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:58.329 [2024-10-25 18:04:16.531278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:58.329 [2024-10-25 18:04:16.531284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:58.329 [2024-10-25 18:04:16.531290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:58.329 [2024-10-25 18:04:16.531296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:58.329 [2024-10-25 18:04:16.531301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:58.329 [2024-10-25 18:04:16.531306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:58.329 [2024-10-25 18:04:16.531311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:58.329 [2024-10-25 18:04:16.531317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:58.329 [2024-10-25 18:04:16.531322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:58.329 [2024-10-25 18:04:16.531329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:58.329 [2024-10-25 18:04:16.531335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:58.329 [2024-10-25 18:04:16.531340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:58.329 [2024-10-25 18:04:16.531346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:58.329 [2024-10-25 18:04:16.531351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:58.329 [2024-10-25 18:04:16.531357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:58.329 [2024-10-25 18:04:16.531363] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:58.329 [2024-10-25 18:04:16.531370] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:58.329 [2024-10-25 18:04:16.531378] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:58.329 [2024-10-25 18:04:16.531384] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:58.329 [2024-10-25 18:04:16.531389] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:58.329 [2024-10-25 18:04:16.531394] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:58.329 [2024-10-25 18:04:16.531400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.329 [2024-10-25 18:04:16.531405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:58.329 [2024-10-25 18:04:16.531411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.478 ms 00:22:58.329 [2024-10-25 18:04:16.531417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.329 [2024-10-25 18:04:16.555599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.329 [2024-10-25 18:04:16.555706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:58.329 [2024-10-25 18:04:16.555718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.136 ms 00:22:58.329 [2024-10-25 18:04:16.555725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.329 [2024-10-25 18:04:16.555800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.329 [2024-10-25 18:04:16.555811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:58.329 [2024-10-25 18:04:16.555818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:58.329 [2024-10-25 18:04:16.555824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.329 [2024-10-25 18:04:16.608857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.329 [2024-10-25 18:04:16.608890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:58.329 [2024-10-25 18:04:16.608899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.988 ms 00:22:58.329 [2024-10-25 18:04:16.608906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.329 [2024-10-25 18:04:16.608949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.329 [2024-10-25 18:04:16.608957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:58.329 [2024-10-25 18:04:16.608964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:58.329 [2024-10-25 18:04:16.608973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.329 [2024-10-25 18:04:16.609400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.329 [2024-10-25 18:04:16.609415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:58.329 [2024-10-25 18:04:16.609423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:22:58.329 [2024-10-25 18:04:16.609430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.329 [2024-10-25 
18:04:16.609543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.329 [2024-10-25 18:04:16.609551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:58.329 [2024-10-25 18:04:16.609573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:22:58.329 [2024-10-25 18:04:16.609579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.329 [2024-10-25 18:04:16.621363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.329 [2024-10-25 18:04:16.621384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:58.329 [2024-10-25 18:04:16.621395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.762 ms 00:22:58.329 [2024-10-25 18:04:16.621403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.329 [2024-10-25 18:04:16.631595] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:22:58.329 [2024-10-25 18:04:16.631618] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:58.329 [2024-10-25 18:04:16.631628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.329 [2024-10-25 18:04:16.631635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:58.329 [2024-10-25 18:04:16.631643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.141 ms 00:22:58.329 [2024-10-25 18:04:16.631649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.329 [2024-10-25 18:04:16.650249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.329 [2024-10-25 18:04:16.650276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:58.329 [2024-10-25 18:04:16.650284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.566 ms 00:22:58.329 [2024-10-25 18:04:16.650292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.329 [2024-10-25 18:04:16.659443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.329 [2024-10-25 18:04:16.659470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:58.330 [2024-10-25 18:04:16.659478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.119 ms 00:22:58.330 [2024-10-25 18:04:16.659483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.330 [2024-10-25 18:04:16.667903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.330 [2024-10-25 18:04:16.667922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:58.330 [2024-10-25 18:04:16.667930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.393 ms 00:22:58.330 [2024-10-25 18:04:16.667936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.330 [2024-10-25 18:04:16.668422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.330 [2024-10-25 18:04:16.668432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:58.330 [2024-10-25 18:04:16.668444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:22:58.330 [2024-10-25 18:04:16.668450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.330 [2024-10-25 18:04:16.716113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:58.330 [2024-10-25 18:04:16.716160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:58.330 [2024-10-25 18:04:16.716172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.644 ms 00:22:58.330 [2024-10-25 18:04:16.716183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.330 [2024-10-25 18:04:16.724386] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:58.330 [2024-10-25 18:04:16.726854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.330 [2024-10-25 18:04:16.726874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:58.330 [2024-10-25 18:04:16.726884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.627 ms 00:22:58.330 [2024-10-25 18:04:16.726896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.330 [2024-10-25 18:04:16.726993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.330 [2024-10-25 18:04:16.727002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:58.330 [2024-10-25 18:04:16.727010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:58.330 [2024-10-25 18:04:16.727016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.330 [2024-10-25 18:04:16.728471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.330 [2024-10-25 18:04:16.728492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:58.330 [2024-10-25 18:04:16.728500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.421 ms 00:22:58.330 [2024-10-25 18:04:16.728507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.330 [2024-10-25 18:04:16.728530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.330 [2024-10-25 18:04:16.728537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:58.330 [2024-10-25 18:04:16.728545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:58.330 [2024-10-25 18:04:16.728551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.330 [2024-10-25 18:04:16.728593] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:58.330 [2024-10-25 18:04:16.728604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.330 [2024-10-25 18:04:16.728611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:58.330 [2024-10-25 18:04:16.728619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:58.330 [2024-10-25 18:04:16.728627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.590 [2024-10-25 18:04:16.817922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.590 [2024-10-25 18:04:16.818136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:58.590 [2024-10-25 18:04:16.818153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.279 ms 00:22:58.590 [2024-10-25 18:04:16.818161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.590 [2024-10-25 18:04:16.818231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.590 [2024-10-25 18:04:16.818240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:22:58.590 [2024-10-25 18:04:16.818247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:58.590 [2024-10-25 18:04:16.818253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.590 [2024-10-25 18:04:16.819167] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 310.567 ms, result 0 00:22:59.963  [2024-10-25T18:04:18.963Z] Copying: 1416/1048576 [kB] (1416 kBps) [2024-10-25T18:04:20.336Z] Copying: 8172/1048576 [kB] (6756 kBps) [2024-10-25T18:04:21.270Z] Copying: 59/1024 [MB] (51 MBps) [2024-10-25T18:04:22.203Z] Copying: 112/1024 [MB] (53 MBps) [2024-10-25T18:04:23.136Z] Copying: 165/1024 [MB] (52 MBps) [2024-10-25T18:04:24.070Z] Copying: 217/1024 [MB] (52 MBps) [2024-10-25T18:04:25.006Z] Copying: 271/1024 [MB] (53 MBps) [2024-10-25T18:04:26.377Z] Copying: 323/1024 [MB] (52 MBps) [2024-10-25T18:04:27.310Z] Copying: 375/1024 [MB] (51 MBps) [2024-10-25T18:04:28.241Z] Copying: 425/1024 [MB] (50 MBps) [2024-10-25T18:04:29.175Z] Copying: 477/1024 [MB] (51 MBps) [2024-10-25T18:04:30.108Z] Copying: 529/1024 [MB] (52 MBps) [2024-10-25T18:04:31.040Z] Copying: 582/1024 [MB] (52 MBps) [2024-10-25T18:04:31.972Z] Copying: 635/1024 [MB] (53 MBps) [2024-10-25T18:04:33.356Z] Copying: 691/1024 [MB] (55 MBps) [2024-10-25T18:04:34.288Z] Copying: 744/1024 [MB] (53 MBps) [2024-10-25T18:04:35.221Z] Copying: 792/1024 [MB] (48 MBps) [2024-10-25T18:04:36.154Z] Copying: 833/1024 [MB] (40 MBps) [2024-10-25T18:04:37.088Z] Copying: 883/1024 [MB] (50 MBps) [2024-10-25T18:04:38.019Z] Copying: 924/1024 [MB] (41 MBps) [2024-10-25T18:04:39.387Z] Copying: 978/1024 [MB] (54 MBps) [2024-10-25T18:04:39.387Z] Copying: 1019/1024 [MB] (40 MBps) [2024-10-25T18:04:39.956Z] Copying: 1024/1024 [MB] (average 46 MBps)[2024-10-25 18:04:39.928014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.521 [2024-10-25 18:04:39.928115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:21.521 [2024-10-25 18:04:39.928148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:21.521 [2024-10-25 18:04:39.928164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.521 [2024-10-25 18:04:39.928204] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:21.521 [2024-10-25 18:04:39.931788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.521 [2024-10-25 18:04:39.931824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:21.521 [2024-10-25 18:04:39.931835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.558 ms 00:23:21.521 [2024-10-25 18:04:39.931844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.521 [2024-10-25 18:04:39.932079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.521 [2024-10-25 18:04:39.932090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:21.521 [2024-10-25 18:04:39.932100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:23:21.521 [2024-10-25 18:04:39.932113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.521 [2024-10-25 18:04:39.942522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.521 [2024-10-25 18:04:39.942573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:21.521 
[2024-10-25 18:04:39.942585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.393 ms 00:23:21.521 [2024-10-25 18:04:39.942595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.521 [2024-10-25 18:04:39.948962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.522 [2024-10-25 18:04:39.948990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:21.522 [2024-10-25 18:04:39.949001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.342 ms 00:23:21.522 [2024-10-25 18:04:39.949017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.779 [2024-10-25 18:04:39.973515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.779 [2024-10-25 18:04:39.973582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:21.779 [2024-10-25 18:04:39.973594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.448 ms 00:23:21.779 [2024-10-25 18:04:39.973603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.779 [2024-10-25 18:04:39.987497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.779 [2024-10-25 18:04:39.987528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:21.779 [2024-10-25 18:04:39.987539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.862 ms 00:23:21.779 [2024-10-25 18:04:39.987547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.779 [2024-10-25 18:04:39.989667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.779 [2024-10-25 18:04:39.989868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:21.779 [2024-10-25 18:04:39.989885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.081 ms 00:23:21.779 [2024-10-25 18:04:39.989895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.779 [2024-10-25 18:04:40.012475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.779 [2024-10-25 18:04:40.012507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:21.779 [2024-10-25 18:04:40.012519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.562 ms 00:23:21.779 [2024-10-25 18:04:40.012527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.779 [2024-10-25 18:04:40.035291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.779 [2024-10-25 18:04:40.035328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:21.779 [2024-10-25 18:04:40.035349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.716 ms 00:23:21.779 [2024-10-25 18:04:40.035357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.779 [2024-10-25 18:04:40.057435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.779 [2024-10-25 18:04:40.057470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:21.780 [2024-10-25 18:04:40.057482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.045 ms 00:23:21.780 [2024-10-25 18:04:40.057490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.780 [2024-10-25 18:04:40.079691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.780 [2024-10-25 18:04:40.079723] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:21.780 [2024-10-25 18:04:40.079734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.143 ms 00:23:21.780 [2024-10-25 18:04:40.079742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.780 [2024-10-25 18:04:40.079774] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:21.780 [2024-10-25 18:04:40.079790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:23:21.780 [2024-10-25 18:04:40.079802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:23:21.780 [2024-10-25 18:04:40.079810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.079819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.079827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.079836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.079844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.079851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.079858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.079866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.079874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.079882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.079889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.079897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.079905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.079913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.079920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.079929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.079939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.079947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.079954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.079962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 
00:23:21.780 [2024-10-25 18:04:40.079969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.079977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.079984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.079992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 
wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:21.780 [2024-10-25 18:04:40.080473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:21.781 [2024-10-25 18:04:40.080480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:21.781 [2024-10-25 18:04:40.080487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:21.781 [2024-10-25 18:04:40.080494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:21.781 [2024-10-25 18:04:40.080501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:21.781 [2024-10-25 18:04:40.080509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:21.781 [2024-10-25 18:04:40.080517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:21.781 [2024-10-25 18:04:40.080524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:21.781 [2024-10-25 18:04:40.080532] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:21.781 [2024-10-25 18:04:40.080539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:21.781 [2024-10-25 18:04:40.080547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:21.781 [2024-10-25 18:04:40.080570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:21.781 [2024-10-25 18:04:40.080587] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:21.781 [2024-10-25 18:04:40.080595] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a4c6862a-3d0b-43e3-bcc4-02b5c61c45c7 00:23:21.781 [2024-10-25 18:04:40.080604] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:23:21.781 [2024-10-25 18:04:40.080612] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 138176 00:23:21.781 [2024-10-25 18:04:40.080620] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 136192 00:23:21.781 [2024-10-25 18:04:40.080630] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0146 00:23:21.781 [2024-10-25 18:04:40.080641] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:21.781 [2024-10-25 18:04:40.080650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:21.781 [2024-10-25 18:04:40.080658] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:21.781 [2024-10-25 18:04:40.080673] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:21.781 [2024-10-25 18:04:40.080679] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:21.781 [2024-10-25 18:04:40.080687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.781 [2024-10-25 18:04:40.080695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:21.781 [2024-10-25 18:04:40.080704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.914 ms 00:23:21.781 [2024-10-25 18:04:40.080712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.781 [2024-10-25 18:04:40.093655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.781 [2024-10-25 18:04:40.093685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:21.781 [2024-10-25 18:04:40.093700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.927 ms 00:23:21.781 [2024-10-25 18:04:40.093714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.781 [2024-10-25 18:04:40.094070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.781 [2024-10-25 18:04:40.094080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:21.781 [2024-10-25 18:04:40.094088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:23:21.781 [2024-10-25 18:04:40.094096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.781 [2024-10-25 18:04:40.128233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.781 [2024-10-25 18:04:40.128276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:21.781 [2024-10-25 18:04:40.128287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.781 [2024-10-25 18:04:40.128296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:21.781 [2024-10-25 18:04:40.128365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.781 [2024-10-25 18:04:40.128374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:21.781 [2024-10-25 18:04:40.128382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.781 [2024-10-25 18:04:40.128390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.781 [2024-10-25 18:04:40.128453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.781 [2024-10-25 18:04:40.128468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:21.781 [2024-10-25 18:04:40.128477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.781 [2024-10-25 18:04:40.128485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.781 [2024-10-25 18:04:40.128500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.781 [2024-10-25 18:04:40.128508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:21.781 [2024-10-25 18:04:40.128517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.781 [2024-10-25 18:04:40.128524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.781 [2024-10-25 18:04:40.207163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.781 [2024-10-25 18:04:40.207419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:21.781 [2024-10-25 18:04:40.207437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.781 [2024-10-25 18:04:40.207446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.038 [2024-10-25 18:04:40.272052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.038 [2024-10-25 18:04:40.272108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:22.038 [2024-10-25 18:04:40.272121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.038 [2024-10-25 18:04:40.272130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.038 [2024-10-25 18:04:40.272217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.038 [2024-10-25 18:04:40.272228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:22.038 [2024-10-25 18:04:40.272236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.038 [2024-10-25 18:04:40.272246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.038 [2024-10-25 18:04:40.272282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.038 [2024-10-25 18:04:40.272292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:22.038 [2024-10-25 18:04:40.272300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.038 [2024-10-25 18:04:40.272308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.038 [2024-10-25 18:04:40.272396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.038 [2024-10-25 18:04:40.272407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:22.038 [2024-10-25 18:04:40.272415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:23:22.038 [2024-10-25 18:04:40.272425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.038 [2024-10-25 18:04:40.272455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.038 [2024-10-25 18:04:40.272464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:22.038 [2024-10-25 18:04:40.272473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.038 [2024-10-25 18:04:40.272481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.038 [2024-10-25 18:04:40.272518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.038 [2024-10-25 18:04:40.272527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:22.038 [2024-10-25 18:04:40.272536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.038 [2024-10-25 18:04:40.272544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.038 [2024-10-25 18:04:40.272608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.038 [2024-10-25 18:04:40.272619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:22.038 [2024-10-25 18:04:40.272628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.038 [2024-10-25 18:04:40.272636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.038 [2024-10-25 18:04:40.272757] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 344.730 ms, result 0 00:23:24.565 00:23:24.565 00:23:24.565 18:04:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:26.464 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:26.464 18:04:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:26.464 [2024-10-25 18:04:44.513736] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:23:26.464 [2024-10-25 18:04:44.513833] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77335 ] 00:23:26.464 [2024-10-25 18:04:44.667100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.464 [2024-10-25 18:04:44.784269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.722 [2024-10-25 18:04:45.061684] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:26.722 [2024-10-25 18:04:45.061766] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:26.981 [2024-10-25 18:04:45.217352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.981 [2024-10-25 18:04:45.217589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:26.981 [2024-10-25 18:04:45.217616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:26.981 [2024-10-25 18:04:45.217626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.981 [2024-10-25 18:04:45.217689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.981 [2024-10-25 18:04:45.217700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:26.981 [2024-10-25 18:04:45.217721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:23:26.981 [2024-10-25 18:04:45.217729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.981 [2024-10-25 18:04:45.217750] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:26.981 [2024-10-25 18:04:45.218460] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:26.981 [2024-10-25 18:04:45.218478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.981 [2024-10-25 18:04:45.218487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:26.981 [2024-10-25 18:04:45.218496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.734 ms 00:23:26.981 [2024-10-25 18:04:45.218504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.981 [2024-10-25 18:04:45.220015] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:26.981 [2024-10-25 18:04:45.233040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.981 [2024-10-25 18:04:45.233227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:26.981 [2024-10-25 18:04:45.233247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.026 ms 00:23:26.981 [2024-10-25 18:04:45.233256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.981 [2024-10-25 18:04:45.233323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.981 [2024-10-25 18:04:45.233336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:26.981 [2024-10-25 18:04:45.233344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:23:26.981 [2024-10-25 18:04:45.233352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.981 [2024-10-25 18:04:45.240375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:26.981 [2024-10-25 18:04:45.240546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:26.981 [2024-10-25 18:04:45.240575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.958 ms 00:23:26.981 [2024-10-25 18:04:45.240584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.981 [2024-10-25 18:04:45.240674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.981 [2024-10-25 18:04:45.240684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:26.981 [2024-10-25 18:04:45.240693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:23:26.981 [2024-10-25 18:04:45.240701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.981 [2024-10-25 18:04:45.240763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.981 [2024-10-25 18:04:45.240773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:26.981 [2024-10-25 18:04:45.240781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:26.981 [2024-10-25 18:04:45.240789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.981 [2024-10-25 18:04:45.240815] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:26.981 [2024-10-25 18:04:45.244528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.981 [2024-10-25 18:04:45.244570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:26.981 [2024-10-25 18:04:45.244581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.721 ms 00:23:26.981 [2024-10-25 18:04:45.244593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.981 [2024-10-25 18:04:45.244624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.981 [2024-10-25 18:04:45.244632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:26.981 [2024-10-25 18:04:45.244641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:26.981 [2024-10-25 18:04:45.244649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.981 [2024-10-25 18:04:45.244672] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:26.981 [2024-10-25 18:04:45.244693] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:26.981 [2024-10-25 18:04:45.244732] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:26.981 [2024-10-25 18:04:45.244750] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:26.981 [2024-10-25 18:04:45.244859] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:26.981 [2024-10-25 18:04:45.244871] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:26.981 [2024-10-25 18:04:45.244881] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:26.981 [2024-10-25 18:04:45.244894] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:26.981 [2024-10-25 18:04:45.244903] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:26.981 [2024-10-25 18:04:45.244911] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:26.981 [2024-10-25 18:04:45.244920] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:26.981 [2024-10-25 18:04:45.244927] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:26.981 [2024-10-25 18:04:45.244934] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:26.981 [2024-10-25 18:04:45.244946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.981 [2024-10-25 18:04:45.244954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:26.981 [2024-10-25 18:04:45.244962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:23:26.981 [2024-10-25 18:04:45.244970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.981 [2024-10-25 18:04:45.245076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.981 [2024-10-25 18:04:45.245092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:26.981 [2024-10-25 18:04:45.245100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:23:26.981 [2024-10-25 18:04:45.245108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.981 [2024-10-25 18:04:45.245221] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:26.981 [2024-10-25 18:04:45.245235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:26.981 [2024-10-25 18:04:45.245243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:26.981 [2024-10-25 18:04:45.245251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.981 [2024-10-25 18:04:45.245259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:26.981 [2024-10-25 18:04:45.245266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:26.981 [2024-10-25 18:04:45.245273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:26.981 [2024-10-25 18:04:45.245281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:26.981 [2024-10-25 18:04:45.245288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:26.981 [2024-10-25 18:04:45.245294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:26.981 [2024-10-25 18:04:45.245301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:26.981 [2024-10-25 18:04:45.245309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:26.981 [2024-10-25 18:04:45.245315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:26.981 [2024-10-25 18:04:45.245322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:26.981 [2024-10-25 18:04:45.245329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:26.981 [2024-10-25 18:04:45.245344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.981 [2024-10-25 18:04:45.245355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:26.981 [2024-10-25 18:04:45.245362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:26.981 [2024-10-25 18:04:45.245369] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.981 [2024-10-25 18:04:45.245376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:26.981 [2024-10-25 18:04:45.245383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:26.981 [2024-10-25 18:04:45.245390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:26.981 [2024-10-25 18:04:45.245397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:26.981 [2024-10-25 18:04:45.245404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:26.981 [2024-10-25 18:04:45.245411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:26.981 [2024-10-25 18:04:45.245418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:26.981 [2024-10-25 18:04:45.245424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:26.981 [2024-10-25 18:04:45.245431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:26.981 [2024-10-25 18:04:45.245438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:26.981 [2024-10-25 18:04:45.245445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:26.981 [2024-10-25 18:04:45.245451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:26.981 [2024-10-25 18:04:45.245459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:26.981 [2024-10-25 18:04:45.245466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:26.981 [2024-10-25 18:04:45.245474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:26.981 [2024-10-25 18:04:45.245483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:26.982 [2024-10-25 18:04:45.245490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:26.982 [2024-10-25 18:04:45.245497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:26.982 [2024-10-25 18:04:45.245504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:26.982 [2024-10-25 18:04:45.245510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:26.982 [2024-10-25 18:04:45.245517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.982 [2024-10-25 18:04:45.245524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:26.982 [2024-10-25 18:04:45.245530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:26.982 [2024-10-25 18:04:45.245537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.982 [2024-10-25 18:04:45.245544] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:26.982 [2024-10-25 18:04:45.245573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:26.982 [2024-10-25 18:04:45.245582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:26.982 [2024-10-25 18:04:45.245590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.982 [2024-10-25 18:04:45.245598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:26.982 [2024-10-25 18:04:45.245607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:26.982 [2024-10-25 18:04:45.245614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:26.982 
[2024-10-25 18:04:45.245623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:26.982 [2024-10-25 18:04:45.245630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:26.982 [2024-10-25 18:04:45.245637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:26.982 [2024-10-25 18:04:45.245647] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:26.982 [2024-10-25 18:04:45.245660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:26.982 [2024-10-25 18:04:45.245669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:26.982 [2024-10-25 18:04:45.245677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:26.982 [2024-10-25 18:04:45.245684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:26.982 [2024-10-25 18:04:45.245692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:26.982 [2024-10-25 18:04:45.245700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:26.982 [2024-10-25 18:04:45.245715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:26.982 [2024-10-25 18:04:45.245722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:26.982 [2024-10-25 18:04:45.245730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:26.982 [2024-10-25 18:04:45.245737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:26.982 [2024-10-25 18:04:45.245745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:26.982 [2024-10-25 18:04:45.245752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:26.982 [2024-10-25 18:04:45.245759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:26.982 [2024-10-25 18:04:45.245767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:26.982 [2024-10-25 18:04:45.245775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:26.982 [2024-10-25 18:04:45.245782] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:26.982 [2024-10-25 18:04:45.245791] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:26.982 [2024-10-25 18:04:45.245802] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:26.982 [2024-10-25 18:04:45.245810] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:26.982 [2024-10-25 18:04:45.245817] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:26.982 [2024-10-25 18:04:45.245825] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:26.982 [2024-10-25 18:04:45.245833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.982 [2024-10-25 18:04:45.245840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:26.982 [2024-10-25 18:04:45.245848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms 00:23:26.982 [2024-10-25 18:04:45.245857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.982 [2024-10-25 18:04:45.275193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.982 [2024-10-25 18:04:45.275363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:26.982 [2024-10-25 18:04:45.275380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.289 ms 00:23:26.982 [2024-10-25 18:04:45.275389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.982 [2024-10-25 18:04:45.275485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.982 [2024-10-25 18:04:45.275499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:26.982 [2024-10-25 18:04:45.275508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:23:26.982 [2024-10-25 18:04:45.275515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.982 [2024-10-25 18:04:45.319533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.982 [2024-10-25 18:04:45.319597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:26.982 [2024-10-25 18:04:45.319611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.931 ms 00:23:26.982 [2024-10-25 18:04:45.319621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.982 [2024-10-25 18:04:45.319684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.982 [2024-10-25 18:04:45.319694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:26.982 [2024-10-25 18:04:45.319704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:26.982 [2024-10-25 18:04:45.319715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.982 [2024-10-25 18:04:45.320206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.982 [2024-10-25 18:04:45.320231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:26.982 [2024-10-25 18:04:45.320242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:23:26.982 [2024-10-25 18:04:45.320251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.982 [2024-10-25 18:04:45.320396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.982 [2024-10-25 18:04:45.320406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:26.982 [2024-10-25 18:04:45.320414] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:23:26.982 [2024-10-25 18:04:45.320422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.982 [2024-10-25 18:04:45.334770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.982 [2024-10-25 18:04:45.334803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:26.982 [2024-10-25 18:04:45.334814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.323 ms 00:23:26.982 [2024-10-25 18:04:45.334824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.982 [2024-10-25 18:04:45.347711] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:26.982 [2024-10-25 18:04:45.347746] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:26.982 [2024-10-25 18:04:45.347758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.982 [2024-10-25 18:04:45.347767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:26.982 [2024-10-25 18:04:45.347777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.823 ms 00:23:26.982 [2024-10-25 18:04:45.347785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.982 [2024-10-25 18:04:45.372636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.982 [2024-10-25 18:04:45.372682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:26.982 [2024-10-25 18:04:45.372694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.808 ms 00:23:26.982 [2024-10-25 18:04:45.372702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.982 [2024-10-25 18:04:45.384456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.982 [2024-10-25 18:04:45.384489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:26.982 [2024-10-25 18:04:45.384499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.704 ms 00:23:26.982 [2024-10-25 18:04:45.384508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.982 [2024-10-25 18:04:45.396181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.982 [2024-10-25 18:04:45.396213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:26.982 [2024-10-25 18:04:45.396223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.641 ms 00:23:26.982 [2024-10-25 18:04:45.396231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.982 [2024-10-25 18:04:45.396872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.982 [2024-10-25 18:04:45.396894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:26.982 [2024-10-25 18:04:45.396904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:23:26.982 [2024-10-25 18:04:45.396912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.240 [2024-10-25 18:04:45.456096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.240 [2024-10-25 18:04:45.456161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:27.240 [2024-10-25 18:04:45.456175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 59.160 ms 00:23:27.240 [2024-10-25 18:04:45.456188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.240 [2024-10-25 18:04:45.467326] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:27.240 [2024-10-25 18:04:45.470404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.240 [2024-10-25 18:04:45.470632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:27.240 [2024-10-25 18:04:45.470651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.158 ms 00:23:27.240 [2024-10-25 18:04:45.470660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.240 [2024-10-25 18:04:45.470786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.240 [2024-10-25 18:04:45.470798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:27.240 [2024-10-25 18:04:45.470808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:27.240 [2024-10-25 18:04:45.470816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.240 [2024-10-25 18:04:45.471515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.240 [2024-10-25 18:04:45.471550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:27.240 [2024-10-25 18:04:45.471571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.659 ms 00:23:27.240 [2024-10-25 18:04:45.471579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.240 [2024-10-25 18:04:45.471607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.240 [2024-10-25 18:04:45.471617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:27.240 [2024-10-25 18:04:45.471625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:27.240 [2024-10-25 18:04:45.471633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.241 [2024-10-25 18:04:45.471671] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:27.241 [2024-10-25 18:04:45.471684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.241 [2024-10-25 18:04:45.471693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:27.241 [2024-10-25 18:04:45.471701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:27.241 [2024-10-25 18:04:45.471709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.241 [2024-10-25 18:04:45.496025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.241 [2024-10-25 18:04:45.496063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:27.241 [2024-10-25 18:04:45.496075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.298 ms 00:23:27.241 [2024-10-25 18:04:45.496084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.241 [2024-10-25 18:04:45.496161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.241 [2024-10-25 18:04:45.496171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:27.241 [2024-10-25 18:04:45.496180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:27.241 [2024-10-25 18:04:45.496188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:23:27.241 [2024-10-25 18:04:45.497255] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 279.447 ms, result 0
00:23:28.615  [2024-10-25T18:04:47.997Z] Copying: 46/1024 [MB] (46 MBps) [2024-10-25T18:04:48.963Z] Copying: 95/1024 [MB] (49 MBps) [2024-10-25T18:04:49.896Z] Copying: 142/1024 [MB] (47 MBps) [2024-10-25T18:04:50.831Z] Copying: 193/1024 [MB] (50 MBps) [2024-10-25T18:04:51.763Z] Copying: 243/1024 [MB] (50 MBps) [2024-10-25T18:04:52.696Z] Copying: 292/1024 [MB] (49 MBps) [2024-10-25T18:04:54.070Z] Copying: 342/1024 [MB] (49 MBps) [2024-10-25T18:04:55.006Z] Copying: 391/1024 [MB] (48 MBps) [2024-10-25T18:04:55.940Z] Copying: 440/1024 [MB] (49 MBps) [2024-10-25T18:04:56.874Z] Copying: 487/1024 [MB] (47 MBps) [2024-10-25T18:04:57.808Z] Copying: 542/1024 [MB] (54 MBps) [2024-10-25T18:04:58.743Z] Copying: 592/1024 [MB] (49 MBps) [2024-10-25T18:04:59.678Z] Copying: 640/1024 [MB] (47 MBps) [2024-10-25T18:05:01.051Z] Copying: 688/1024 [MB] (48 MBps) [2024-10-25T18:05:01.987Z] Copying: 738/1024 [MB] (49 MBps) [2024-10-25T18:05:02.922Z] Copying: 784/1024 [MB] (46 MBps) [2024-10-25T18:05:03.854Z] Copying: 834/1024 [MB] (49 MBps) [2024-10-25T18:05:04.790Z] Copying: 884/1024 [MB] (50 MBps) [2024-10-25T18:05:05.724Z] Copying: 936/1024 [MB] (51 MBps) [2024-10-25T18:05:06.754Z] Copying: 985/1024 [MB] (49 MBps) [2024-10-25T18:05:06.754Z] Copying: 1024/1024 [MB] (average 49 MBps)[2024-10-25 18:05:06.643149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:48.319 [2024-10-25 18:05:06.643254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:23:48.319 [2024-10-25 18:05:06.643276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:23:48.319 [2024-10-25 18:05:06.643288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:48.319 [2024-10-25 18:05:06.643320] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:48.319 [2024-10-25 18:05:06.647331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:48.319 [2024-10-25 18:05:06.647382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:23:48.319 [2024-10-25 18:05:06.647398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.989 ms
00:23:48.319 [2024-10-25 18:05:06.647417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:48.319 [2024-10-25 18:05:06.647754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:48.319 [2024-10-25 18:05:06.647771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:23:48.319 [2024-10-25 18:05:06.647785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms
00:23:48.319 [2024-10-25 18:05:06.647797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:48.319 [2024-10-25 18:05:06.655046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:48.319 [2024-10-25 18:05:06.655080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:23:48.319 [2024-10-25 18:05:06.655090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.228 ms
00:23:48.319 [2024-10-25 18:05:06.655099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:48.319 [2024-10-25 18:05:06.661342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:48.319 [2024-10-25 18:05:06.661378] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:48.319 [2024-10-25 18:05:06.661388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.210 ms 00:23:48.319 [2024-10-25 18:05:06.661395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.319 [2024-10-25 18:05:06.686178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.319 [2024-10-25 18:05:06.686213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:48.319 [2024-10-25 18:05:06.686224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.725 ms 00:23:48.320 [2024-10-25 18:05:06.686233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.320 [2024-10-25 18:05:06.700299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.320 [2024-10-25 18:05:06.700537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:48.320 [2024-10-25 18:05:06.700575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.045 ms 00:23:48.320 [2024-10-25 18:05:06.700585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.320 [2024-10-25 18:05:06.702445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.320 [2024-10-25 18:05:06.702478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:48.320 [2024-10-25 18:05:06.702494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.835 ms 00:23:48.320 [2024-10-25 18:05:06.702502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.320 [2024-10-25 18:05:06.725798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.320 [2024-10-25 18:05:06.725830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:48.320 [2024-10-25 18:05:06.725840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.280 ms 00:23:48.320 [2024-10-25 18:05:06.725847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.320 [2024-10-25 18:05:06.748669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.320 [2024-10-25 18:05:06.748720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:48.320 [2024-10-25 18:05:06.748730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.804 ms 00:23:48.320 [2024-10-25 18:05:06.748738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.580 [2024-10-25 18:05:06.771518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.580 [2024-10-25 18:05:06.771716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:48.580 [2024-10-25 18:05:06.771734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.761 ms 00:23:48.580 [2024-10-25 18:05:06.771743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.580 [2024-10-25 18:05:06.793796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.580 [2024-10-25 18:05:06.793827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:48.580 [2024-10-25 18:05:06.793838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.005 ms 00:23:48.580 [2024-10-25 18:05:06.793845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.580 [2024-10-25 18:05:06.793866] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Bands validity: 00:23:48.580 [2024-10-25 18:05:06.793881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:23:48.580 [2024-10-25 18:05:06.793897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:23:48.580 [2024-10-25 18:05:06.793906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.793914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.793922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.793930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.793938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.793946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.793953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.793962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.793970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.793978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.793986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.793994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794291] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:48.580 [2024-10-25 18:05:06.794404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 
18:05:06.794480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 
00:23:48.581 [2024-10-25 18:05:06.794709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:48.581 [2024-10-25 18:05:06.794725] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:48.581 [2024-10-25 18:05:06.794733] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a4c6862a-3d0b-43e3-bcc4-02b5c61c45c7 00:23:48.581 [2024-10-25 18:05:06.794744] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:23:48.581 [2024-10-25 18:05:06.794752] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:48.581 [2024-10-25 18:05:06.794759] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:48.581 [2024-10-25 18:05:06.794767] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:48.581 [2024-10-25 18:05:06.794774] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:48.581 [2024-10-25 18:05:06.794793] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:48.581 [2024-10-25 18:05:06.794808] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:48.581 [2024-10-25 18:05:06.794815] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:48.581 [2024-10-25 18:05:06.794821] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:48.581 [2024-10-25 18:05:06.794828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.581 [2024-10-25 18:05:06.794836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:48.581 [2024-10-25 18:05:06.794845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms 00:23:48.581 [2024-10-25 18:05:06.794853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.581 [2024-10-25 18:05:06.807667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.581 [2024-10-25 18:05:06.807697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:48.581 [2024-10-25 18:05:06.807708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.798 ms 00:23:48.581 [2024-10-25 18:05:06.807717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.581 [2024-10-25 18:05:06.808075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.581 [2024-10-25 18:05:06.808085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:48.581 [2024-10-25 18:05:06.808093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:23:48.581 [2024-10-25 18:05:06.808105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.581 [2024-10-25 18:05:06.842153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.581 [2024-10-25 18:05:06.842359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:48.581 [2024-10-25 18:05:06.842377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.581 [2024-10-25 18:05:06.842386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.581 [2024-10-25 18:05:06.842454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.581 [2024-10-25 18:05:06.842463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:48.581 [2024-10-25 18:05:06.842471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:23:48.581 [2024-10-25 18:05:06.842482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.581 [2024-10-25 18:05:06.842549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.581 [2024-10-25 18:05:06.842577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:48.581 [2024-10-25 18:05:06.842586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.581 [2024-10-25 18:05:06.842595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.581 [2024-10-25 18:05:06.842610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.581 [2024-10-25 18:05:06.842619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:48.581 [2024-10-25 18:05:06.842627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.581 [2024-10-25 18:05:06.842635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.581 [2024-10-25 18:05:06.921932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.581 [2024-10-25 18:05:06.921988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:48.581 [2024-10-25 18:05:06.922001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.581 [2024-10-25 18:05:06.922009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.581 [2024-10-25 18:05:06.986702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.581 [2024-10-25 18:05:06.986758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:48.581 [2024-10-25 18:05:06.986771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.581 [2024-10-25 18:05:06.986785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.581 [2024-10-25 18:05:06.986868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.581 [2024-10-25 18:05:06.986878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:48.581 [2024-10-25 18:05:06.986887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.581 [2024-10-25 18:05:06.986894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.581 [2024-10-25 18:05:06.986930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.581 [2024-10-25 18:05:06.986939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:48.581 [2024-10-25 18:05:06.986947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.581 [2024-10-25 18:05:06.986955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.581 [2024-10-25 18:05:06.987048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.581 [2024-10-25 18:05:06.987058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:48.581 [2024-10-25 18:05:06.987067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.581 [2024-10-25 18:05:06.987074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.581 [2024-10-25 18:05:06.987103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.581 [2024-10-25 18:05:06.987112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:48.581 [2024-10-25 
18:05:06.987121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:48.581 [2024-10-25 18:05:06.987129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:48.581 [2024-10-25 18:05:06.987167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:48.581 [2024-10-25 18:05:06.987178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:23:48.581 [2024-10-25 18:05:06.987186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:48.581 [2024-10-25 18:05:06.987194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:48.581 [2024-10-25 18:05:06.987235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:48.581 [2024-10-25 18:05:06.987245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:23:48.581 [2024-10-25 18:05:06.987253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:48.581 [2024-10-25 18:05:06.987262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:48.581 [2024-10-25 18:05:06.987385] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 344.215 ms, result 0
00:23:49.516 
00:23:49.516 
00:23:49.516 18:05:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
00:23:51.427 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK
00:23:51.427 18:05:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT
00:23:51.427 18:05:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill
00:23:51.427 18:05:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:23:51.427 18:05:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:23:51.427 18:05:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
00:23:51.427 18:05:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:23:51.427 18:05:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
00:23:51.427 Process with pid 76054 is not found
00:23:51.427 18:05:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 76054
00:23:51.427 18:05:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 76054 ']'
00:23:51.427 18:05:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 76054
00:23:51.427 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (76054) - No such process
00:23:51.427 18:05:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 76054 is not found'
00:23:51.427 18:05:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd
00:23:51.685 Remove shared memory files
00:23:51.685 18:05:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm
00:23:51.685 18:05:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:23:51.685 18:05:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:23:51.685 18:05:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:23:51.685 18:05:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f
00:23:51.685 18:05:09 
ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:51.685 18:05:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:23:51.685 ************************************ 00:23:51.685 END TEST ftl_dirty_shutdown 00:23:51.685 ************************************ 00:23:51.685 00:23:51.685 real 2m24.844s 00:23:51.685 user 2m44.021s 00:23:51.685 sys 0m23.657s 00:23:51.685 18:05:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:51.685 18:05:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:51.685 18:05:09 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:23:51.685 18:05:09 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:23:51.685 18:05:09 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:51.685 18:05:09 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:51.685 ************************************ 00:23:51.685 START TEST ftl_upgrade_shutdown 00:23:51.685 ************************************ 00:23:51.685 18:05:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:23:51.685 * Looking for test storage... 00:23:51.685 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1689 -- # lcov --version 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:23:51.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.685 --rc genhtml_branch_coverage=1 00:23:51.685 --rc genhtml_function_coverage=1 00:23:51.685 --rc genhtml_legend=1 00:23:51.685 --rc geninfo_all_blocks=1 00:23:51.685 --rc geninfo_unexecuted_blocks=1 00:23:51.685 00:23:51.685 ' 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:23:51.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.685 --rc genhtml_branch_coverage=1 00:23:51.685 --rc genhtml_function_coverage=1 00:23:51.685 --rc genhtml_legend=1 00:23:51.685 --rc geninfo_all_blocks=1 00:23:51.685 --rc geninfo_unexecuted_blocks=1 00:23:51.685 00:23:51.685 ' 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:23:51.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.685 --rc genhtml_branch_coverage=1 00:23:51.685 --rc genhtml_function_coverage=1 00:23:51.685 --rc genhtml_legend=1 00:23:51.685 --rc geninfo_all_blocks=1 00:23:51.685 --rc geninfo_unexecuted_blocks=1 00:23:51.685 00:23:51.685 ' 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:23:51.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.685 --rc genhtml_branch_coverage=1 00:23:51.685 --rc genhtml_function_coverage=1 00:23:51.685 --rc genhtml_legend=1 00:23:51.685 --rc geninfo_all_blocks=1 00:23:51.685 --rc geninfo_unexecuted_blocks=1 00:23:51.685 00:23:51.685 ' 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:51.685 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:23:51.944 18:05:10 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=77668 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 77668 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 77668 ']' 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:51.944 18:05:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:51.944 [2024-10-25 18:05:10.216122] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
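The tcp_target_setup step above reduces to the launch-and-wait pattern the common scripts drive: start spdk_tgt pinned to core 0, then poll its RPC socket until it answers. A minimal sketch of that pattern follows; the polling body is a paraphrase of what waitforlisten does (the retry cap of 100 matches the max_retries seen in the trace), not a verbatim copy of it:

build/bin/spdk_tgt --cpumask '[0]' &    # same invocation as the trace above
spdk_tgt_pid=$!
for ((i = 0; i < 100; i++)); do
    # succeeds once the target listens on /var/tmp/spdk.sock
    scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null && break
    sleep 0.5
done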
00:23:51.944 [2024-10-25 18:05:10.216411] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77668 ] 00:23:51.944 [2024-10-25 18:05:10.376787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.202 [2024-10-25 18:05:10.491398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.771 18:05:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:52.771 18:05:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:23:52.771 18:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:23:52.771 18:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:23:52.771 18:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:23:52.771 18:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:52.771 18:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:23:52.771 18:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:52.771 18:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:23:52.771 18:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:52.771 18:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:23:52.771 18:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:52.771 18:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:23:52.771 18:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:52.771 18:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:23:52.771 18:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:52.771 18:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:23:52.771 18:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:23:52.771 18:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:23:52.771 18:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:52.771 18:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:23:52.771 18:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:23:52.771 18:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:23:53.030 18:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:23:53.030 18:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:23:53.030 18:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:23:53.030 18:05:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:23:53.030 18:05:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:53.030 18:05:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:53.030 18:05:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:23:53.030 18:05:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:23:53.289 18:05:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:53.289 { 00:23:53.289 "name": "basen1", 00:23:53.289 "aliases": [ 00:23:53.289 "5b289775-7dd2-4cb3-bc5a-26102e0118a3" 00:23:53.289 ], 00:23:53.289 "product_name": "NVMe disk", 00:23:53.289 "block_size": 4096, 00:23:53.289 "num_blocks": 1310720, 00:23:53.289 "uuid": "5b289775-7dd2-4cb3-bc5a-26102e0118a3", 00:23:53.289 "numa_id": -1, 00:23:53.289 "assigned_rate_limits": { 00:23:53.289 "rw_ios_per_sec": 0, 00:23:53.289 "rw_mbytes_per_sec": 0, 00:23:53.289 "r_mbytes_per_sec": 0, 00:23:53.289 "w_mbytes_per_sec": 0 00:23:53.289 }, 00:23:53.289 "claimed": true, 00:23:53.289 "claim_type": "read_many_write_one", 00:23:53.289 "zoned": false, 00:23:53.289 "supported_io_types": { 00:23:53.289 "read": true, 00:23:53.289 "write": true, 00:23:53.289 "unmap": true, 00:23:53.289 "flush": true, 00:23:53.289 "reset": true, 00:23:53.289 "nvme_admin": true, 00:23:53.289 "nvme_io": true, 00:23:53.289 "nvme_io_md": false, 00:23:53.289 "write_zeroes": true, 00:23:53.289 "zcopy": false, 00:23:53.289 "get_zone_info": false, 00:23:53.289 "zone_management": false, 00:23:53.289 "zone_append": false, 00:23:53.289 "compare": true, 00:23:53.289 "compare_and_write": false, 00:23:53.289 "abort": true, 00:23:53.289 "seek_hole": false, 00:23:53.289 "seek_data": false, 00:23:53.289 "copy": true, 00:23:53.289 "nvme_iov_md": false 00:23:53.289 }, 00:23:53.289 "driver_specific": { 00:23:53.289 "nvme": [ 00:23:53.289 { 00:23:53.289 "pci_address": "0000:00:11.0", 00:23:53.289 "trid": { 00:23:53.289 "trtype": "PCIe", 00:23:53.289 "traddr": "0000:00:11.0" 00:23:53.289 }, 00:23:53.289 "ctrlr_data": { 00:23:53.289 "cntlid": 0, 00:23:53.289 "vendor_id": "0x1b36", 00:23:53.289 "model_number": "QEMU NVMe Ctrl", 00:23:53.289 "serial_number": "12341", 00:23:53.289 "firmware_revision": "8.0.0", 00:23:53.289 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:53.289 "oacs": { 00:23:53.289 "security": 0, 00:23:53.289 "format": 1, 00:23:53.289 "firmware": 0, 00:23:53.289 "ns_manage": 1 00:23:53.289 }, 00:23:53.289 "multi_ctrlr": false, 00:23:53.289 "ana_reporting": false 00:23:53.289 }, 00:23:53.289 "vs": { 00:23:53.289 "nvme_version": "1.4" 00:23:53.289 }, 00:23:53.289 "ns_data": { 00:23:53.289 "id": 1, 00:23:53.289 "can_share": false 00:23:53.289 } 00:23:53.289 } 00:23:53.289 ], 00:23:53.289 "mp_policy": "active_passive" 00:23:53.289 } 00:23:53.289 } 00:23:53.289 ]' 00:23:53.289 18:05:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:53.289 18:05:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:53.289 18:05:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:53.289 18:05:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:23:53.289 18:05:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:23:53.289 18:05:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:23:53.289 18:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:23:53.289 18:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:23:53.289 18:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:23:53.289 18:05:11 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:53.289 18:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:53.548 18:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=0cba73cf-fff6-43a9-a58c-dda97cce5155 00:23:53.548 18:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:23:53.548 18:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0cba73cf-fff6-43a9-a58c-dda97cce5155 00:23:53.806 18:05:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:23:54.065 18:05:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=ab3c5e56-3f75-4f17-b703-aaef41c1f280 00:23:54.065 18:05:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u ab3c5e56-3f75-4f17-b703-aaef41c1f280 00:23:54.322 18:05:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=2c537e2a-9d33-4f8a-abb0-93dbdd1520e6 00:23:54.322 18:05:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 2c537e2a-9d33-4f8a-abb0-93dbdd1520e6 ]] 00:23:54.323 18:05:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 2c537e2a-9d33-4f8a-abb0-93dbdd1520e6 5120 00:23:54.323 18:05:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:23:54.323 18:05:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:54.323 18:05:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=2c537e2a-9d33-4f8a-abb0-93dbdd1520e6 00:23:54.323 18:05:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:23:54.323 18:05:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 2c537e2a-9d33-4f8a-abb0-93dbdd1520e6 00:23:54.323 18:05:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=2c537e2a-9d33-4f8a-abb0-93dbdd1520e6 00:23:54.323 18:05:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:54.323 18:05:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:54.323 18:05:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:54.323 18:05:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2c537e2a-9d33-4f8a-abb0-93dbdd1520e6 00:23:54.323 18:05:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:54.323 { 00:23:54.323 "name": "2c537e2a-9d33-4f8a-abb0-93dbdd1520e6", 00:23:54.323 "aliases": [ 00:23:54.323 "lvs/basen1p0" 00:23:54.323 ], 00:23:54.323 "product_name": "Logical Volume", 00:23:54.323 "block_size": 4096, 00:23:54.323 "num_blocks": 5242880, 00:23:54.323 "uuid": "2c537e2a-9d33-4f8a-abb0-93dbdd1520e6", 00:23:54.323 "assigned_rate_limits": { 00:23:54.323 "rw_ios_per_sec": 0, 00:23:54.323 "rw_mbytes_per_sec": 0, 00:23:54.323 "r_mbytes_per_sec": 0, 00:23:54.323 "w_mbytes_per_sec": 0 00:23:54.323 }, 00:23:54.323 "claimed": false, 00:23:54.323 "zoned": false, 00:23:54.323 "supported_io_types": { 00:23:54.323 "read": true, 00:23:54.323 "write": true, 00:23:54.323 "unmap": true, 00:23:54.323 "flush": false, 00:23:54.323 "reset": true, 00:23:54.323 "nvme_admin": false, 00:23:54.323 "nvme_io": false, 00:23:54.323 "nvme_io_md": false, 00:23:54.323 "write_zeroes": 
true, 00:23:54.323 "zcopy": false, 00:23:54.323 "get_zone_info": false, 00:23:54.323 "zone_management": false, 00:23:54.323 "zone_append": false, 00:23:54.323 "compare": false, 00:23:54.323 "compare_and_write": false, 00:23:54.323 "abort": false, 00:23:54.323 "seek_hole": true, 00:23:54.323 "seek_data": true, 00:23:54.323 "copy": false, 00:23:54.323 "nvme_iov_md": false 00:23:54.323 }, 00:23:54.323 "driver_specific": { 00:23:54.323 "lvol": { 00:23:54.323 "lvol_store_uuid": "ab3c5e56-3f75-4f17-b703-aaef41c1f280", 00:23:54.323 "base_bdev": "basen1", 00:23:54.323 "thin_provision": true, 00:23:54.323 "num_allocated_clusters": 0, 00:23:54.323 "snapshot": false, 00:23:54.323 "clone": false, 00:23:54.323 "esnap_clone": false 00:23:54.323 } 00:23:54.323 } 00:23:54.323 } 00:23:54.323 ]' 00:23:54.323 18:05:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:54.580 18:05:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:54.580 18:05:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:54.580 18:05:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:23:54.580 18:05:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:23:54.580 18:05:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:23:54.580 18:05:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:23:54.581 18:05:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:23:54.581 18:05:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:23:54.839 18:05:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:23:54.839 18:05:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:23:54.839 18:05:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:23:54.839 18:05:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:23:54.839 18:05:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:23:54.839 18:05:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 2c537e2a-9d33-4f8a-abb0-93dbdd1520e6 -c cachen1p0 --l2p_dram_limit 2 00:23:55.098 [2024-10-25 18:05:13.400649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:55.098 [2024-10-25 18:05:13.400884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:23:55.098 [2024-10-25 18:05:13.400907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:23:55.098 [2024-10-25 18:05:13.400915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:55.098 [2024-10-25 18:05:13.400984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:55.098 [2024-10-25 18:05:13.400992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:23:55.098 [2024-10-25 18:05:13.401001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:23:55.098 [2024-10-25 18:05:13.401007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:55.098 [2024-10-25 18:05:13.401027] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:23:55.098 [2024-10-25 
18:05:13.401654] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:23:55.098 [2024-10-25 18:05:13.401672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:55.098 [2024-10-25 18:05:13.401678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:23:55.098 [2024-10-25 18:05:13.401686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.647 ms 00:23:55.098 [2024-10-25 18:05:13.401693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:55.098 [2024-10-25 18:05:13.401787] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID bb14d880-e949-4bda-a944-28dc5de98510 00:23:55.098 [2024-10-25 18:05:13.403161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:55.098 [2024-10-25 18:05:13.403194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:23:55.098 [2024-10-25 18:05:13.403203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:23:55.098 [2024-10-25 18:05:13.403212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:55.098 [2024-10-25 18:05:13.410315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:55.098 [2024-10-25 18:05:13.410347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:23:55.098 [2024-10-25 18:05:13.410356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.066 ms 00:23:55.098 [2024-10-25 18:05:13.410367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:55.098 [2024-10-25 18:05:13.410401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:55.098 [2024-10-25 18:05:13.410414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:23:55.098 [2024-10-25 18:05:13.410421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:23:55.098 [2024-10-25 18:05:13.410431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:55.098 [2024-10-25 18:05:13.410480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:55.098 [2024-10-25 18:05:13.410492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:23:55.098 [2024-10-25 18:05:13.410499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:23:55.098 [2024-10-25 18:05:13.410507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:55.098 [2024-10-25 18:05:13.410528] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:23:55.098 [2024-10-25 18:05:13.413847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:55.098 [2024-10-25 18:05:13.413872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:23:55.098 [2024-10-25 18:05:13.413882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.325 ms 00:23:55.098 [2024-10-25 18:05:13.413894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:55.098 [2024-10-25 18:05:13.413917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:55.098 [2024-10-25 18:05:13.413925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:23:55.098 [2024-10-25 18:05:13.413933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:23:55.098 [2024-10-25 18:05:13.413939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:23:55.098 [2024-10-25 18:05:13.413954] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:23:55.098 [2024-10-25 18:05:13.414064] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:23:55.098 [2024-10-25 18:05:13.414077] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:23:55.098 [2024-10-25 18:05:13.414088] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:23:55.098 [2024-10-25 18:05:13.414098] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:23:55.098 [2024-10-25 18:05:13.414105] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:23:55.098 [2024-10-25 18:05:13.414113] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:23:55.098 [2024-10-25 18:05:13.414120] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:23:55.098 [2024-10-25 18:05:13.414127] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:23:55.098 [2024-10-25 18:05:13.414132] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:23:55.098 [2024-10-25 18:05:13.414143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:55.098 [2024-10-25 18:05:13.414149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:23:55.098 [2024-10-25 18:05:13.414156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.190 ms 00:23:55.098 [2024-10-25 18:05:13.414162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:55.099 [2024-10-25 18:05:13.414227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:55.099 [2024-10-25 18:05:13.414234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:23:55.099 [2024-10-25 18:05:13.414243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:23:55.099 [2024-10-25 18:05:13.414255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:55.099 [2024-10-25 18:05:13.414331] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:23:55.099 [2024-10-25 18:05:13.414340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:23:55.099 [2024-10-25 18:05:13.414349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:23:55.099 [2024-10-25 18:05:13.414355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:55.099 [2024-10-25 18:05:13.414362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:23:55.099 [2024-10-25 18:05:13.414368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:23:55.099 [2024-10-25 18:05:13.414375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:23:55.099 [2024-10-25 18:05:13.414380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:23:55.099 [2024-10-25 18:05:13.414387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:23:55.099 [2024-10-25 18:05:13.414392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:55.099 [2024-10-25 18:05:13.414399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:23:55.099 [2024-10-25 18:05:13.414405] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:23:55.099 [2024-10-25 18:05:13.414412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:55.099 [2024-10-25 18:05:13.414418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:23:55.099 [2024-10-25 18:05:13.414425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:23:55.099 [2024-10-25 18:05:13.414430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:55.099 [2024-10-25 18:05:13.414439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:23:55.099 [2024-10-25 18:05:13.414445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:23:55.099 [2024-10-25 18:05:13.414451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:55.099 [2024-10-25 18:05:13.414458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:23:55.099 [2024-10-25 18:05:13.414467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:23:55.099 [2024-10-25 18:05:13.414471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:55.099 [2024-10-25 18:05:13.414478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:23:55.099 [2024-10-25 18:05:13.414484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:23:55.099 [2024-10-25 18:05:13.414490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:55.099 [2024-10-25 18:05:13.414495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:23:55.099 [2024-10-25 18:05:13.414501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:23:55.099 [2024-10-25 18:05:13.414506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:55.099 [2024-10-25 18:05:13.414513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:23:55.099 [2024-10-25 18:05:13.414518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:23:55.099 [2024-10-25 18:05:13.414524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:55.099 [2024-10-25 18:05:13.414530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:23:55.099 [2024-10-25 18:05:13.414539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:23:55.099 [2024-10-25 18:05:13.414544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:55.099 [2024-10-25 18:05:13.414563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:23:55.099 [2024-10-25 18:05:13.414569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:23:55.099 [2024-10-25 18:05:13.414576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:55.099 [2024-10-25 18:05:13.414581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:23:55.099 [2024-10-25 18:05:13.414588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:23:55.099 [2024-10-25 18:05:13.414594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:55.099 [2024-10-25 18:05:13.414600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:23:55.099 [2024-10-25 18:05:13.414605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:23:55.099 [2024-10-25 18:05:13.414612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:55.099 [2024-10-25 18:05:13.414618] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:23:55.099 [2024-10-25 18:05:13.414625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:23:55.099 [2024-10-25 18:05:13.414631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:23:55.099 [2024-10-25 18:05:13.414639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:55.099 [2024-10-25 18:05:13.414645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:23:55.099 [2024-10-25 18:05:13.414655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:23:55.099 [2024-10-25 18:05:13.414660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:23:55.099 [2024-10-25 18:05:13.414667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:23:55.099 [2024-10-25 18:05:13.414672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:23:55.099 [2024-10-25 18:05:13.414679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:23:55.099 [2024-10-25 18:05:13.414688] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:23:55.099 [2024-10-25 18:05:13.414697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:55.099 [2024-10-25 18:05:13.414705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:23:55.099 [2024-10-25 18:05:13.414712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:23:55.099 [2024-10-25 18:05:13.414717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:23:55.099 [2024-10-25 18:05:13.414724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:23:55.099 [2024-10-25 18:05:13.414729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:23:55.099 [2024-10-25 18:05:13.414736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:23:55.099 [2024-10-25 18:05:13.414741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:23:55.099 [2024-10-25 18:05:13.414748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:23:55.099 [2024-10-25 18:05:13.414754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:23:55.099 [2024-10-25 18:05:13.414762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:23:55.099 [2024-10-25 18:05:13.414767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:23:55.099 [2024-10-25 18:05:13.414775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:23:55.099 [2024-10-25 18:05:13.414780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:23:55.099 [2024-10-25 18:05:13.414787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:23:55.099 [2024-10-25 18:05:13.414793] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:23:55.099 [2024-10-25 18:05:13.414801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:55.099 [2024-10-25 18:05:13.414810] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:55.099 [2024-10-25 18:05:13.414817] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:23:55.099 [2024-10-25 18:05:13.414823] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:23:55.099 [2024-10-25 18:05:13.414831] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:23:55.100 [2024-10-25 18:05:13.414837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:55.100 [2024-10-25 18:05:13.414844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:23:55.100 [2024-10-25 18:05:13.414850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.561 ms 00:23:55.100 [2024-10-25 18:05:13.414857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:55.100 [2024-10-25 18:05:13.414901] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
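The trace above is the FTL construction path from ftl/common.sh: a thin-provisioned logical volume on basen1 is the 20 GiB base device (the size check multiplies block_size 4096 by num_blocks 5242880 to get 20480 MiB), a 5 GiB split of the NVMe at 0000:00:10.0 becomes the non-volatile write-buffer cache, and bdev_ftl_create ties the two together. A minimal sketch of that RPC sequence, reusing the names, sizes, and paths from this run (the shell variable plumbing is assumed; the commands themselves are the ones traced):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # base device: thin-provisioned 20480 MiB lvol carved out of basen1
    lvs=$("$RPC" bdev_lvol_create_lvstore basen1 lvs)
    base=$("$RPC" bdev_lvol_create basen1p0 20480 -t -u "$lvs")
    # NV cache: attach the cache NVMe, split off one 5120 MiB partition
    "$RPC" bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
    "$RPC" bdev_split_create cachen1 -s 5120 1        # yields cachen1p0
    # combine both into the FTL bdev (2 GiB L2P DRAM limit, 60 s RPC timeout)
    "$RPC" -t 60 bdev_ftl_create -b ftl -d "$base" -c cachen1p0 --l2p_dram_limit 2

Because the superblock is brand new (UUID bb14d880-e949-4bda-a944-28dc5de98510), the NV cache data region has to be scrubbed before first use, which is the ~2.2 s "Scrub NV cache" step just below.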
00:23:55.100 [2024-10-25 18:05:13.414912] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:23:57.628 [2024-10-25 18:05:15.621575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:57.628 [2024-10-25 18:05:15.621652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:23:57.628 [2024-10-25 18:05:15.621668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2206.662 ms 00:23:57.628 [2024-10-25 18:05:15.621678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:57.628 [2024-10-25 18:05:15.649803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:57.628 [2024-10-25 18:05:15.649858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:23:57.628 [2024-10-25 18:05:15.649872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.901 ms 00:23:57.628 [2024-10-25 18:05:15.649883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:57.628 [2024-10-25 18:05:15.649976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:57.628 [2024-10-25 18:05:15.649989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:23:57.628 [2024-10-25 18:05:15.649998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:23:57.628 [2024-10-25 18:05:15.650010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:57.628 [2024-10-25 18:05:15.682506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:57.628 [2024-10-25 18:05:15.682566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:23:57.628 [2024-10-25 18:05:15.682578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.459 ms 00:23:57.628 [2024-10-25 18:05:15.682588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:57.628 [2024-10-25 18:05:15.682626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:57.628 [2024-10-25 18:05:15.682638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:23:57.628 [2024-10-25 18:05:15.682649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:23:57.628 [2024-10-25 18:05:15.682660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:57.628 [2024-10-25 18:05:15.683100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:57.628 [2024-10-25 18:05:15.683125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:23:57.628 [2024-10-25 18:05:15.683134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.385 ms 00:23:57.628 [2024-10-25 18:05:15.683145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:57.628 [2024-10-25 18:05:15.683193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:57.628 [2024-10-25 18:05:15.683203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:23:57.628 [2024-10-25 18:05:15.683212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:23:57.628 [2024-10-25 18:05:15.683224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:57.628 [2024-10-25 18:05:15.698734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:57.628 [2024-10-25 18:05:15.698902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:23:57.628 [2024-10-25 18:05:15.698918] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.488 ms 00:23:57.628 [2024-10-25 18:05:15.698931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:57.628 [2024-10-25 18:05:15.711142] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:23:57.629 [2024-10-25 18:05:15.712162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:57.629 [2024-10-25 18:05:15.712190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:23:57.629 [2024-10-25 18:05:15.712202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.120 ms 00:23:57.629 [2024-10-25 18:05:15.712209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:57.629 [2024-10-25 18:05:15.744197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:57.629 [2024-10-25 18:05:15.744367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:23:57.629 [2024-10-25 18:05:15.744391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.958 ms 00:23:57.629 [2024-10-25 18:05:15.744401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:57.629 [2024-10-25 18:05:15.744478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:57.629 [2024-10-25 18:05:15.744489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:23:57.629 [2024-10-25 18:05:15.744501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:23:57.629 [2024-10-25 18:05:15.744512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:57.629 [2024-10-25 18:05:15.767451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:57.629 [2024-10-25 18:05:15.767593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:23:57.629 [2024-10-25 18:05:15.767613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.877 ms 00:23:57.629 [2024-10-25 18:05:15.767622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:57.629 [2024-10-25 18:05:15.789857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:57.629 [2024-10-25 18:05:15.789887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:23:57.629 [2024-10-25 18:05:15.789900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.208 ms 00:23:57.629 [2024-10-25 18:05:15.789908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:57.629 [2024-10-25 18:05:15.790466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:57.629 [2024-10-25 18:05:15.790481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:23:57.629 [2024-10-25 18:05:15.790492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.537 ms 00:23:57.629 [2024-10-25 18:05:15.790499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:57.629 [2024-10-25 18:05:15.870003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:57.629 [2024-10-25 18:05:15.870069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:23:57.629 [2024-10-25 18:05:15.870090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 79.465 ms 00:23:57.629 [2024-10-25 18:05:15.870099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:57.629 [2024-10-25 18:05:15.897223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:23:57.629 [2024-10-25 18:05:15.897288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:23:57.629 [2024-10-25 18:05:15.897316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.029 ms 00:23:57.629 [2024-10-25 18:05:15.897325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:57.629 [2024-10-25 18:05:15.923941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:57.629 [2024-10-25 18:05:15.924010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:23:57.629 [2024-10-25 18:05:15.924027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.556 ms 00:23:57.629 [2024-10-25 18:05:15.924035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:57.629 [2024-10-25 18:05:15.949953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:57.629 [2024-10-25 18:05:15.950009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:23:57.629 [2024-10-25 18:05:15.950025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.857 ms 00:23:57.629 [2024-10-25 18:05:15.950033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:57.629 [2024-10-25 18:05:15.950083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:57.629 [2024-10-25 18:05:15.950092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:23:57.629 [2024-10-25 18:05:15.950108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:23:57.629 [2024-10-25 18:05:15.950120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:57.629 [2024-10-25 18:05:15.950214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:57.629 [2024-10-25 18:05:15.950228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:23:57.629 [2024-10-25 18:05:15.950239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:23:57.629 [2024-10-25 18:05:15.950246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:57.629 [2024-10-25 18:05:15.951309] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2550.189 ms, result 0 00:23:57.629 { 00:23:57.629 "name": "ftl", 00:23:57.629 "uuid": "bb14d880-e949-4bda-a944-28dc5de98510" 00:23:57.629 } 00:23:57.629 18:05:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:23:57.888 [2024-10-25 18:05:16.166533] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.888 18:05:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:23:58.146 18:05:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:23:58.146 [2024-10-25 18:05:16.559041] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:23:58.146 18:05:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:23:58.405 [2024-10-25 18:05:16.764171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:58.405 18:05:16 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:23:58.972 Fill FTL, iteration 1 00:23:58.972 18:05:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:23:58.972 18:05:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:23:58.972 18:05:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:23:58.972 18:05:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:23:58.972 18:05:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:23:58.972 18:05:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:23:58.972 18:05:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:23:58.972 18:05:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:23:58.972 18:05:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:23:58.972 18:05:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:23:58.972 18:05:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:23:58.972 18:05:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:23:58.972 18:05:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:58.972 18:05:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:58.972 18:05:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:58.972 18:05:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:23:58.972 18:05:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=77779 00:23:58.972 18:05:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:23:58.972 18:05:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 77779 /var/tmp/spdk.tgt.sock 00:23:58.972 18:05:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 77779 ']' 00:23:58.972 18:05:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:23:58.972 18:05:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:58.972 18:05:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:23:58.972 18:05:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:23:58.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:23:58.972 18:05:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:58.972 18:05:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:58.972 [2024-10-25 18:05:17.198737] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
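Two things are being prepared here. First, the fill geometry: bs and count are chosen so one pass covers the window exactly, 1048576 * 1024 = 1073741824 bytes, i.e. a full 1 GiB per iteration at queue depth 2, with iterations=2. Second, tcp_initiator_setup starts a separate initiator-side SPDK app (spdk_tgt, pid 77779) pinned to core 1 with its own RPC socket, so the dd traffic never has to go through the target process. A sketch with the values traced above (the backgrounding ampersand is assumed; the command is the one from ftl/common.sh@162):

    # initiator-side helper with its own RPC socket
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
    # geometry check: bs * count covers one full iteration window
    size=1073741824 bs=1048576 count=1024
    (( bs * count == size )) && echo "1 GiB per pass, 2 GiB across both iterations"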
00:23:58.972 [2024-10-25 18:05:17.199231] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77779 ] 00:23:58.972 [2024-10-25 18:05:17.358394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.231 [2024-10-25 18:05:17.456499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.798 18:05:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:59.798 18:05:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:23:59.798 18:05:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:24:00.057 ftln1 00:24:00.057 18:05:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:24:00.057 18:05:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:24:00.315 18:05:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:24:00.315 18:05:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 77779 00:24:00.315 18:05:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 77779 ']' 00:24:00.315 18:05:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 77779 00:24:00.315 18:05:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:24:00.315 18:05:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:00.315 18:05:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77779 00:24:00.315 18:05:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:00.315 killing process with pid 77779 00:24:00.315 18:05:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:00.315 18:05:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77779' 00:24:00.315 18:05:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 77779 00:24:00.315 18:05:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 77779 00:24:01.690 18:05:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:24:01.690 18:05:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:24:01.690 [2024-10-25 18:05:20.056979] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
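The attach traced above happens exactly once: the helper connects to nqn.2018-09.io.spdk:cnode0 at 127.0.0.1:4420, which surfaces the target's FTL bdev locally as ftln1, and the resulting bdev subsystem config is wrapped in {"subsystems": [...]} and saved to test/ftl/config/ini.json before the helper is killed. From then on, every tcp_dd call is a standalone spdk_dd run that replays that JSON to re-attach and move the data. The first fill pass, reflowed from the single traced command line:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --if=/dev/urandom --ob=ftln1 \
        --bs=1048576 --count=1024 --qd=2 --seek=0

Note that --ob names a bdev rather than a file, so the random data flows through the NVMe/TCP initiator straight into the FTL device.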
00:24:01.690 [2024-10-25 18:05:20.057103] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77826 ] 00:24:01.948 [2024-10-25 18:05:20.217207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.948 [2024-10-25 18:05:20.317404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.328 [2024-10-25T18:05:22.696Z] Copying: 237/1024 [MB] (237 MBps) [2024-10-25T18:05:24.067Z] Copying: 492/1024 [MB] (255 MBps) [2024-10-25T18:05:25.001Z] Copying: 755/1024 [MB] (263 MBps) [2024-10-25T18:05:25.001Z] Copying: 1024/1024 [MB] (269 MBps) [2024-10-25T18:05:25.259Z] Copying: 1024/1024 [MB] (average 255 MBps) 00:24:06.824 00:24:06.824 Calculate MD5 checksum, iteration 1 00:24:06.824 18:05:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:24:06.824 18:05:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:24:06.824 18:05:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:06.825 18:05:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:06.825 18:05:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:06.825 18:05:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:06.825 18:05:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:07.083 18:05:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:07.083 [2024-10-25 18:05:25.321632] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
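The fill averaged 255 MBps; the pass starting here reads the same 1024 MiB back out of ftln1 into a plain file so it can be hashed on the host. Between the write and read invocations only the direction flags flip, and the digest is captured with the md5sum | cut pipeline traced above (the sums[0] label is inferred from the sums[i] assignment that follows):

    # write pass: --if=/dev/urandom --ob=ftln1 ... --seek=N
    # read pass:  --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file ... --skip=N
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d '    # recorded as sums[0]

The digest is kept so data integrity can be compared across the prep_upgrade_on_shutdown cycle exercised later in the test.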
00:24:07.084 [2024-10-25 18:05:25.321763] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77879 ] 00:24:07.084 [2024-10-25 18:05:25.475448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.342 [2024-10-25 18:05:25.560374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.713 [2024-10-25T18:05:27.715Z] Copying: 651/1024 [MB] (651 MBps) [2024-10-25T18:05:27.974Z] Copying: 1024/1024 [MB] (average 632 MBps) 00:24:09.539 00:24:09.539 18:05:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:24:09.539 18:05:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:12.068 18:05:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:24:12.068 Fill FTL, iteration 2 00:24:12.068 18:05:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=c4d6bae3d44fe099ed98735e76b81b6b 00:24:12.068 18:05:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:24:12.068 18:05:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:24:12.068 18:05:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:24:12.068 18:05:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:24:12.068 18:05:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:12.068 18:05:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:12.068 18:05:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:12.068 18:05:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:12.068 18:05:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:24:12.068 [2024-10-25 18:05:30.166083] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
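With sums[0] banked (c4d6bae3d44fe099ed98735e76b81b6b), the loop advances by one full window: seek and skip both move from 0 to 1024 (units are bs-sized 1 MiB blocks), so iteration 2 fills and verifies blocks [1024, 2048) without disturbing iteration 1's data. In effect, per pass (the log traces only the resulting values, so this assignment form is an assumption):

    seek=$(( seek + count ))    # write offset: 0 -> 1024, then 2048 after pass 2
    skip=$(( skip + count ))    # readback offset follows the same stride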
00:24:12.068 [2024-10-25 18:05:30.166201] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77939 ] 00:24:12.068 [2024-10-25 18:05:30.323240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.068 [2024-10-25 18:05:30.405598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.441 [2024-10-25T18:05:32.810Z] Copying: 261/1024 [MB] (261 MBps) [2024-10-25T18:05:33.744Z] Copying: 522/1024 [MB] (261 MBps) [2024-10-25T18:05:34.678Z] Copying: 783/1024 [MB] (261 MBps) [2024-10-25T18:05:35.245Z] Copying: 1024/1024 [MB] (average 259 MBps) 00:24:16.810 00:24:16.811 Calculate MD5 checksum, iteration 2 00:24:16.811 18:05:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:24:16.811 18:05:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:24:16.811 18:05:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:24:16.811 18:05:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:16.811 18:05:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:16.811 18:05:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:16.811 18:05:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:17.083 18:05:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:24:17.069 [2024-10-25 18:05:35.297793] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
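Once the second readback is hashed, the test moves to the FTL property interface: verbose_mode is switched on, the full property dump below is fetched, and the number of cache chunks holding any data is counted before prep_upgrade_on_shutdown is armed. A condensed sketch of that round-trip, using the exact RPCs and jq filter invoked below (piping rpc.py straight into jq stands in for the ftl_get_properties helper):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" bdev_ftl_set_property -b ftl -p verbose_mode -v true
    "$RPC" bdev_ftl_get_properties -b ftl |
        jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
    # -> 3 in this run: chunks 1 and 2 are CLOSED at utilization 1.0, chunk 3 is OPEN at 0.001953125
    "$RPC" bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true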
00:24:17.069 [2024-10-25 18:05:35.297914] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77993 ] 00:24:17.069 [2024-10-25 18:05:35.455220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.327 [2024-10-25 18:05:35.538296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.700 [2024-10-25T18:05:37.701Z] Copying: 678/1024 [MB] (678 MBps) [2024-10-25T18:05:38.633Z] Copying: 1024/1024 [MB] (average 650 MBps) 00:24:20.198 00:24:20.198 18:05:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:24:20.198 18:05:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:22.724 18:05:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:24:22.724 18:05:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=ddf9cd651c2208286f17941dcbb56589 00:24:22.724 18:05:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:24:22.724 18:05:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:24:22.724 18:05:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:24:22.724 [2024-10-25 18:05:40.827866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.724 [2024-10-25 18:05:40.828132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:24:22.724 [2024-10-25 18:05:40.828153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:24:22.724 [2024-10-25 18:05:40.828161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.724 [2024-10-25 18:05:40.828189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.724 [2024-10-25 18:05:40.828197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:24:22.724 [2024-10-25 18:05:40.828204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:24:22.724 [2024-10-25 18:05:40.828211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.724 [2024-10-25 18:05:40.828232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.724 [2024-10-25 18:05:40.828238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:24:22.724 [2024-10-25 18:05:40.828245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:24:22.724 [2024-10-25 18:05:40.828252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.724 [2024-10-25 18:05:40.828308] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.433 ms, result 0 00:24:22.724 true 00:24:22.724 18:05:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:24:22.724 { 00:24:22.724 "name": "ftl", 00:24:22.724 "properties": [ 00:24:22.724 { 00:24:22.724 "name": "superblock_version", 00:24:22.724 "value": 5, 00:24:22.724 "read-only": true 00:24:22.724 }, 00:24:22.724 { 00:24:22.724 "name": "base_device", 00:24:22.724 "bands": [ 00:24:22.724 { 00:24:22.724 "id": 0, 00:24:22.724 "state": "FREE", 00:24:22.724 "validity": 0.0 
00:24:22.724 }, 00:24:22.724 { 00:24:22.724 "id": 1, 00:24:22.724 "state": "FREE", 00:24:22.724 "validity": 0.0 00:24:22.724 }, 00:24:22.724 { 00:24:22.724 "id": 2, 00:24:22.724 "state": "FREE", 00:24:22.724 "validity": 0.0 00:24:22.724 }, 00:24:22.724 { 00:24:22.724 "id": 3, 00:24:22.724 "state": "FREE", 00:24:22.724 "validity": 0.0 00:24:22.724 }, 00:24:22.724 { 00:24:22.724 "id": 4, 00:24:22.724 "state": "FREE", 00:24:22.724 "validity": 0.0 00:24:22.724 }, 00:24:22.724 { 00:24:22.724 "id": 5, 00:24:22.724 "state": "FREE", 00:24:22.724 "validity": 0.0 00:24:22.724 }, 00:24:22.724 { 00:24:22.724 "id": 6, 00:24:22.724 "state": "FREE", 00:24:22.724 "validity": 0.0 00:24:22.724 }, 00:24:22.724 { 00:24:22.724 "id": 7, 00:24:22.724 "state": "FREE", 00:24:22.724 "validity": 0.0 00:24:22.724 }, 00:24:22.724 { 00:24:22.724 "id": 8, 00:24:22.724 "state": "FREE", 00:24:22.724 "validity": 0.0 00:24:22.724 }, 00:24:22.724 { 00:24:22.724 "id": 9, 00:24:22.724 "state": "FREE", 00:24:22.724 "validity": 0.0 00:24:22.724 }, 00:24:22.724 { 00:24:22.724 "id": 10, 00:24:22.724 "state": "FREE", 00:24:22.724 "validity": 0.0 00:24:22.724 }, 00:24:22.724 { 00:24:22.724 "id": 11, 00:24:22.724 "state": "FREE", 00:24:22.724 "validity": 0.0 00:24:22.724 }, 00:24:22.724 { 00:24:22.724 "id": 12, 00:24:22.724 "state": "FREE", 00:24:22.724 "validity": 0.0 00:24:22.724 }, 00:24:22.724 { 00:24:22.724 "id": 13, 00:24:22.724 "state": "FREE", 00:24:22.724 "validity": 0.0 00:24:22.724 }, 00:24:22.724 { 00:24:22.724 "id": 14, 00:24:22.724 "state": "FREE", 00:24:22.724 "validity": 0.0 00:24:22.724 }, 00:24:22.724 { 00:24:22.724 "id": 15, 00:24:22.724 "state": "FREE", 00:24:22.724 "validity": 0.0 00:24:22.724 }, 00:24:22.724 { 00:24:22.724 "id": 16, 00:24:22.724 "state": "FREE", 00:24:22.724 "validity": 0.0 00:24:22.724 }, 00:24:22.724 { 00:24:22.724 "id": 17, 00:24:22.724 "state": "FREE", 00:24:22.724 "validity": 0.0 00:24:22.724 } 00:24:22.724 ], 00:24:22.724 "read-only": true 00:24:22.724 }, 00:24:22.724 { 00:24:22.724 "name": "cache_device", 00:24:22.724 "type": "bdev", 00:24:22.725 "chunks": [ 00:24:22.725 { 00:24:22.725 "id": 0, 00:24:22.725 "state": "INACTIVE", 00:24:22.725 "utilization": 0.0 00:24:22.725 }, 00:24:22.725 { 00:24:22.725 "id": 1, 00:24:22.725 "state": "CLOSED", 00:24:22.725 "utilization": 1.0 00:24:22.725 }, 00:24:22.725 { 00:24:22.725 "id": 2, 00:24:22.725 "state": "CLOSED", 00:24:22.725 "utilization": 1.0 00:24:22.725 }, 00:24:22.725 { 00:24:22.725 "id": 3, 00:24:22.725 "state": "OPEN", 00:24:22.725 "utilization": 0.001953125 00:24:22.725 }, 00:24:22.725 { 00:24:22.725 "id": 4, 00:24:22.725 "state": "OPEN", 00:24:22.725 "utilization": 0.0 00:24:22.725 } 00:24:22.725 ], 00:24:22.725 "read-only": true 00:24:22.725 }, 00:24:22.725 { 00:24:22.725 "name": "verbose_mode", 00:24:22.725 "value": true, 00:24:22.725 "unit": "", 00:24:22.725 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:24:22.725 }, 00:24:22.725 { 00:24:22.725 "name": "prep_upgrade_on_shutdown", 00:24:22.725 "value": false, 00:24:22.725 "unit": "", 00:24:22.725 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:24:22.725 } 00:24:22.725 ] 00:24:22.725 } 00:24:22.725 18:05:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:24:22.982 [2024-10-25 18:05:41.236150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:24:22.982 [2024-10-25 18:05:41.236211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:24:22.982 [2024-10-25 18:05:41.236224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:24:22.982 [2024-10-25 18:05:41.236230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.982 [2024-10-25 18:05:41.236250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.982 [2024-10-25 18:05:41.236256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:24:22.982 [2024-10-25 18:05:41.236263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:24:22.982 [2024-10-25 18:05:41.236269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.982 [2024-10-25 18:05:41.236285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:22.982 [2024-10-25 18:05:41.236291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:24:22.982 [2024-10-25 18:05:41.236297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:24:22.982 [2024-10-25 18:05:41.236303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:22.982 [2024-10-25 18:05:41.236352] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.197 ms, result 0 00:24:22.982 true 00:24:22.982 18:05:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:24:22.982 18:05:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:24:22.982 18:05:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:24:23.240 18:05:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:24:23.240 18:05:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:24:23.240 18:05:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:24:23.240 [2024-10-25 18:05:41.648521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:23.240 [2024-10-25 18:05:41.648747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:24:23.240 [2024-10-25 18:05:41.648800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:24:23.240 [2024-10-25 18:05:41.648819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:23.240 [2024-10-25 18:05:41.648855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:23.240 [2024-10-25 18:05:41.648871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:24:23.240 [2024-10-25 18:05:41.648951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:24:23.240 [2024-10-25 18:05:41.648969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:23.240 [2024-10-25 18:05:41.648994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:23.240 [2024-10-25 18:05:41.649011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:24:23.240 [2024-10-25 18:05:41.649044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:24:23.240 [2024-10-25 18:05:41.649061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:24:23.241 [2024-10-25 18:05:41.649126] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.593 ms, result 0 00:24:23.241 true 00:24:23.241 18:05:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:24:23.498 { 00:24:23.498 "name": "ftl", 00:24:23.498 "properties": [ 00:24:23.498 { 00:24:23.498 "name": "superblock_version", 00:24:23.498 "value": 5, 00:24:23.498 "read-only": true 00:24:23.498 }, 00:24:23.498 { 00:24:23.498 "name": "base_device", 00:24:23.498 "bands": [ 00:24:23.498 { 00:24:23.498 "id": 0, 00:24:23.498 "state": "FREE", 00:24:23.498 "validity": 0.0 00:24:23.498 }, 00:24:23.498 { 00:24:23.498 "id": 1, 00:24:23.498 "state": "FREE", 00:24:23.498 "validity": 0.0 00:24:23.498 }, 00:24:23.498 { 00:24:23.498 "id": 2, 00:24:23.498 "state": "FREE", 00:24:23.498 "validity": 0.0 00:24:23.498 }, 00:24:23.498 { 00:24:23.498 "id": 3, 00:24:23.498 "state": "FREE", 00:24:23.498 "validity": 0.0 00:24:23.498 }, 00:24:23.498 { 00:24:23.498 "id": 4, 00:24:23.498 "state": "FREE", 00:24:23.498 "validity": 0.0 00:24:23.498 }, 00:24:23.498 { 00:24:23.498 "id": 5, 00:24:23.498 "state": "FREE", 00:24:23.498 "validity": 0.0 00:24:23.498 }, 00:24:23.498 { 00:24:23.498 "id": 6, 00:24:23.498 "state": "FREE", 00:24:23.498 "validity": 0.0 00:24:23.498 }, 00:24:23.498 { 00:24:23.498 "id": 7, 00:24:23.498 "state": "FREE", 00:24:23.498 "validity": 0.0 00:24:23.498 }, 00:24:23.498 { 00:24:23.498 "id": 8, 00:24:23.498 "state": "FREE", 00:24:23.498 "validity": 0.0 00:24:23.498 }, 00:24:23.498 { 00:24:23.498 "id": 9, 00:24:23.498 "state": "FREE", 00:24:23.498 "validity": 0.0 00:24:23.498 }, 00:24:23.498 { 00:24:23.498 "id": 10, 00:24:23.498 "state": "FREE", 00:24:23.498 "validity": 0.0 00:24:23.498 }, 00:24:23.498 { 00:24:23.498 "id": 11, 00:24:23.498 "state": "FREE", 00:24:23.498 "validity": 0.0 00:24:23.498 }, 00:24:23.498 { 00:24:23.499 "id": 12, 00:24:23.499 "state": "FREE", 00:24:23.499 "validity": 0.0 00:24:23.499 }, 00:24:23.499 { 00:24:23.499 "id": 13, 00:24:23.499 "state": "FREE", 00:24:23.499 "validity": 0.0 00:24:23.499 }, 00:24:23.499 { 00:24:23.499 "id": 14, 00:24:23.499 "state": "FREE", 00:24:23.499 "validity": 0.0 00:24:23.499 }, 00:24:23.499 { 00:24:23.499 "id": 15, 00:24:23.499 "state": "FREE", 00:24:23.499 "validity": 0.0 00:24:23.499 }, 00:24:23.499 { 00:24:23.499 "id": 16, 00:24:23.499 "state": "FREE", 00:24:23.499 "validity": 0.0 00:24:23.499 }, 00:24:23.499 { 00:24:23.499 "id": 17, 00:24:23.499 "state": "FREE", 00:24:23.499 "validity": 0.0 00:24:23.499 } 00:24:23.499 ], 00:24:23.499 "read-only": true 00:24:23.499 }, 00:24:23.499 { 00:24:23.499 "name": "cache_device", 00:24:23.499 "type": "bdev", 00:24:23.499 "chunks": [ 00:24:23.499 { 00:24:23.499 "id": 0, 00:24:23.499 "state": "INACTIVE", 00:24:23.499 "utilization": 0.0 00:24:23.499 }, 00:24:23.499 { 00:24:23.499 "id": 1, 00:24:23.499 "state": "CLOSED", 00:24:23.499 "utilization": 1.0 00:24:23.499 }, 00:24:23.499 { 00:24:23.499 "id": 2, 00:24:23.499 "state": "CLOSED", 00:24:23.499 "utilization": 1.0 00:24:23.499 }, 00:24:23.499 { 00:24:23.499 "id": 3, 00:24:23.499 "state": "OPEN", 00:24:23.499 "utilization": 0.001953125 00:24:23.499 }, 00:24:23.499 { 00:24:23.499 "id": 4, 00:24:23.499 "state": "OPEN", 00:24:23.499 "utilization": 0.0 00:24:23.499 } 00:24:23.499 ], 00:24:23.499 "read-only": true 00:24:23.499 }, 00:24:23.499 { 00:24:23.499 "name": "verbose_mode", 
00:24:23.499 "value": true, 00:24:23.499 "unit": "", 00:24:23.499 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:24:23.499 }, 00:24:23.499 { 00:24:23.499 "name": "prep_upgrade_on_shutdown", 00:24:23.499 "value": true, 00:24:23.499 "unit": "", 00:24:23.499 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:24:23.499 } 00:24:23.499 ] 00:24:23.499 } 00:24:23.499 18:05:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:24:23.499 18:05:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 77668 ]] 00:24:23.499 18:05:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 77668 00:24:23.499 18:05:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 77668 ']' 00:24:23.499 18:05:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 77668 00:24:23.499 18:05:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:24:23.499 18:05:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:23.499 18:05:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77668 00:24:23.499 killing process with pid 77668 00:24:23.499 18:05:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:23.499 18:05:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:23.499 18:05:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77668' 00:24:23.499 18:05:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 77668 00:24:23.499 18:05:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 77668 00:24:24.065 [2024-10-25 18:05:42.483755] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:24:24.065 [2024-10-25 18:05:42.494917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:24.065 [2024-10-25 18:05:42.494959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:24:24.065 [2024-10-25 18:05:42.494971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:24:24.065 [2024-10-25 18:05:42.494979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:24.065 [2024-10-25 18:05:42.494998] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:24:24.065 [2024-10-25 18:05:42.497116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:24.065 [2024-10-25 18:05:42.497140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:24:24.065 [2024-10-25 18:05:42.497150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.106 ms 00:24:24.065 [2024-10-25 18:05:42.497156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.174 [2024-10-25 18:05:49.474052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.174 [2024-10-25 18:05:49.474129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:24:32.174 [2024-10-25 18:05:49.474143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6976.850 ms 00:24:32.174 [2024-10-25 18:05:49.474150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.174 [2024-10-25 18:05:49.475398] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:24:32.174 [2024-10-25 18:05:49.475426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:24:32.174 [2024-10-25 18:05:49.475434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.235 ms 00:24:32.174 [2024-10-25 18:05:49.475440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.174 [2024-10-25 18:05:49.476304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.174 [2024-10-25 18:05:49.476379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:24:32.174 [2024-10-25 18:05:49.476387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.840 ms 00:24:32.174 [2024-10-25 18:05:49.476395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.174 [2024-10-25 18:05:49.484104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.174 [2024-10-25 18:05:49.484293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:24:32.174 [2024-10-25 18:05:49.484307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.680 ms 00:24:32.174 [2024-10-25 18:05:49.484314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.174 [2024-10-25 18:05:49.489439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.174 [2024-10-25 18:05:49.489533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:24:32.174 [2024-10-25 18:05:49.489596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.098 ms 00:24:32.174 [2024-10-25 18:05:49.489617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.174 [2024-10-25 18:05:49.490010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.174 [2024-10-25 18:05:49.490365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:24:32.174 [2024-10-25 18:05:49.490602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.128 ms 00:24:32.174 [2024-10-25 18:05:49.490808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.174 [2024-10-25 18:05:49.506854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.174 [2024-10-25 18:05:49.506961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:24:32.174 [2024-10-25 18:05:49.507017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.857 ms 00:24:32.174 [2024-10-25 18:05:49.507039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.174 [2024-10-25 18:05:49.515975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.174 [2024-10-25 18:05:49.516074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:24:32.174 [2024-10-25 18:05:49.516122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.689 ms 00:24:32.174 [2024-10-25 18:05:49.516143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.174 [2024-10-25 18:05:49.525173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.174 [2024-10-25 18:05:49.525273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:24:32.174 [2024-10-25 18:05:49.525332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.892 ms 00:24:32.174 [2024-10-25 18:05:49.525355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.174 [2024-10-25 18:05:49.534775] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.174 [2024-10-25 18:05:49.534877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:24:32.174 [2024-10-25 18:05:49.534946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.273 ms 00:24:32.174 [2024-10-25 18:05:49.534967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.174 [2024-10-25 18:05:49.535029] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:24:32.174 [2024-10-25 18:05:49.535065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:32.174 [2024-10-25 18:05:49.535118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:24:32.174 [2024-10-25 18:05:49.535157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:24:32.174 [2024-10-25 18:05:49.535223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:32.174 [2024-10-25 18:05:49.535254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:32.174 [2024-10-25 18:05:49.535359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:32.174 [2024-10-25 18:05:49.535427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:32.174 [2024-10-25 18:05:49.535479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:32.174 [2024-10-25 18:05:49.535619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:32.174 [2024-10-25 18:05:49.535693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:32.174 [2024-10-25 18:05:49.535723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:32.174 [2024-10-25 18:05:49.535792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:32.174 [2024-10-25 18:05:49.535825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:32.174 [2024-10-25 18:05:49.535854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:32.174 [2024-10-25 18:05:49.535901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:32.174 [2024-10-25 18:05:49.535946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:32.174 [2024-10-25 18:05:49.535975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:32.174 [2024-10-25 18:05:49.536037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:32.174 [2024-10-25 18:05:49.536072] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:24:32.174 [2024-10-25 18:05:49.536092] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: bb14d880-e949-4bda-a944-28dc5de98510 00:24:32.174 [2024-10-25 18:05:49.536122] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:24:32.174 [2024-10-25 18:05:49.536163] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:24:32.174 [2024-10-25 18:05:49.536182] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:24:32.174 [2024-10-25 18:05:49.536203] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:24:32.174 [2024-10-25 18:05:49.536221] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:24:32.174 [2024-10-25 18:05:49.536240] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:24:32.174 [2024-10-25 18:05:49.536260] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:24:32.174 [2024-10-25 18:05:49.536278] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:24:32.174 [2024-10-25 18:05:49.536295] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:24:32.174 [2024-10-25 18:05:49.536313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.174 [2024-10-25 18:05:49.536334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:24:32.174 [2024-10-25 18:05:49.536362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.285 ms 00:24:32.174 [2024-10-25 18:05:49.536397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.174 [2024-10-25 18:05:49.549395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.174 [2024-10-25 18:05:49.549505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:24:32.174 [2024-10-25 18:05:49.549600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.969 ms 00:24:32.174 [2024-10-25 18:05:49.549625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.175 [2024-10-25 18:05:49.550095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.175 [2024-10-25 18:05:49.550177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:24:32.175 [2024-10-25 18:05:49.550224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.336 ms 00:24:32.175 [2024-10-25 18:05:49.550246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.175 [2024-10-25 18:05:49.593553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:32.175 [2024-10-25 18:05:49.593717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:24:32.175 [2024-10-25 18:05:49.593776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:32.175 [2024-10-25 18:05:49.593787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.175 [2024-10-25 18:05:49.593828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:32.175 [2024-10-25 18:05:49.593841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:24:32.175 [2024-10-25 18:05:49.593849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:32.175 [2024-10-25 18:05:49.593857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.175 [2024-10-25 18:05:49.593937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:32.175 [2024-10-25 18:05:49.593949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:24:32.175 [2024-10-25 18:05:49.593956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:32.175 [2024-10-25 18:05:49.593964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.175 [2024-10-25 18:05:49.593982] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:32.175 [2024-10-25 18:05:49.593990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:24:32.175 [2024-10-25 18:05:49.594002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:32.175 [2024-10-25 18:05:49.594009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.175 [2024-10-25 18:05:49.674574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:32.175 [2024-10-25 18:05:49.674635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:24:32.175 [2024-10-25 18:05:49.674648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:32.175 [2024-10-25 18:05:49.674657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.175 [2024-10-25 18:05:49.740194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:32.175 [2024-10-25 18:05:49.740256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:24:32.175 [2024-10-25 18:05:49.740269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:32.175 [2024-10-25 18:05:49.740277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.175 [2024-10-25 18:05:49.740381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:32.175 [2024-10-25 18:05:49.740391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:24:32.175 [2024-10-25 18:05:49.740400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:32.175 [2024-10-25 18:05:49.740407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.175 [2024-10-25 18:05:49.740452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:32.175 [2024-10-25 18:05:49.740462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:24:32.175 [2024-10-25 18:05:49.740470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:32.175 [2024-10-25 18:05:49.740482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.175 [2024-10-25 18:05:49.740600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:32.175 [2024-10-25 18:05:49.740612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:24:32.175 [2024-10-25 18:05:49.740621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:32.175 [2024-10-25 18:05:49.740629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.175 [2024-10-25 18:05:49.740661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:32.175 [2024-10-25 18:05:49.740672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:24:32.175 [2024-10-25 18:05:49.740680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:32.175 [2024-10-25 18:05:49.740687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.175 [2024-10-25 18:05:49.740729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:32.175 [2024-10-25 18:05:49.740738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:24:32.175 [2024-10-25 18:05:49.740746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:32.175 [2024-10-25 18:05:49.740754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.175 
[2024-10-25 18:05:49.740801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:32.175 [2024-10-25 18:05:49.740811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:24:32.175 [2024-10-25 18:05:49.740819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:32.175 [2024-10-25 18:05:49.740830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.175 [2024-10-25 18:05:49.740956] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7245.989 ms, result 0 00:24:37.457 18:05:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:24:37.457 18:05:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:24:37.457 18:05:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:24:37.457 18:05:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:24:37.457 18:05:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:37.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.457 18:05:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=78174 00:24:37.457 18:05:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:24:37.457 18:05:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 78174 00:24:37.457 18:05:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 78174 ']' 00:24:37.457 18:05:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:37.457 18:05:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.457 18:05:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:37.457 18:05:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.457 18:05:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:37.457 18:05:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:37.457 [2024-10-25 18:05:55.768676] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
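The 'FTL shutdown' management process above completes in 7245.989 ms, with nearly all of it (6976.850 ms) in the 'Stop core poller' step; every persist step (L2P, NV cache metadata, valid map, P2L, band info, trim metadata, superblock) returns status 0. The harness then relaunches the target from the tgt.json saved before shutdown. Condensed from the tcp_target_setup trace, the restart amounts to the sketch below; the binary path, cpumask and the waitforlisten helper are the ones visible in this log, and the sketch illustrates the traced calls rather than quoting ftl/common.sh verbatim:

    # Relaunch spdk_tgt from the config saved ahead of the shutdown (tcp_target_setup).
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    tgt_json=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json

    "$spdk_tgt" '--cpumask=[0]' --config="$tgt_json" &
    spdk_tgt_pid=$!
    # Harness helper traced above: blocks until the new process is listening
    # on the RPC socket /var/tmp/spdk.sock.
    waitforlisten "$spdk_tgt_pid"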
00:24:37.457 [2024-10-25 18:05:55.768802] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78174 ] 00:24:37.797 [2024-10-25 18:05:55.935858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.797 [2024-10-25 18:05:56.051672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.389 [2024-10-25 18:05:56.788523] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:24:38.389 [2024-10-25 18:05:56.788613] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:24:38.648 [2024-10-25 18:05:56.933250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.648 [2024-10-25 18:05:56.933304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:24:38.648 [2024-10-25 18:05:56.933318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:38.648 [2024-10-25 18:05:56.933327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.648 [2024-10-25 18:05:56.933379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.648 [2024-10-25 18:05:56.933389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:24:38.648 [2024-10-25 18:05:56.933398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:24:38.648 [2024-10-25 18:05:56.933406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.648 [2024-10-25 18:05:56.933431] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:24:38.648 [2024-10-25 18:05:56.934150] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:24:38.648 [2024-10-25 18:05:56.934174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.648 [2024-10-25 18:05:56.934182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:24:38.648 [2024-10-25 18:05:56.934192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.751 ms 00:24:38.648 [2024-10-25 18:05:56.934199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.648 [2024-10-25 18:05:56.935576] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:24:38.648 [2024-10-25 18:05:56.948515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.648 [2024-10-25 18:05:56.948549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:24:38.648 [2024-10-25 18:05:56.948581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.940 ms 00:24:38.648 [2024-10-25 18:05:56.948594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.648 [2024-10-25 18:05:56.948859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.648 [2024-10-25 18:05:56.948894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:24:38.648 [2024-10-25 18:05:56.948907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:24:38.648 [2024-10-25 18:05:56.948915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.648 [2024-10-25 18:05:56.955484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.648 [2024-10-25 
18:05:56.955518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:24:38.648 [2024-10-25 18:05:56.955528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.485 ms 00:24:38.648 [2024-10-25 18:05:56.955537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.648 [2024-10-25 18:05:56.955628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.648 [2024-10-25 18:05:56.955640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:24:38.648 [2024-10-25 18:05:56.955650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:24:38.648 [2024-10-25 18:05:56.955658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.648 [2024-10-25 18:05:56.955705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.649 [2024-10-25 18:05:56.955716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:24:38.649 [2024-10-25 18:05:56.955725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:24:38.649 [2024-10-25 18:05:56.955736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.649 [2024-10-25 18:05:56.955766] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:24:38.649 [2024-10-25 18:05:56.959455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.649 [2024-10-25 18:05:56.959482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:24:38.649 [2024-10-25 18:05:56.959493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.697 ms 00:24:38.649 [2024-10-25 18:05:56.959500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.649 [2024-10-25 18:05:56.959530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.649 [2024-10-25 18:05:56.959538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:24:38.649 [2024-10-25 18:05:56.959547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:24:38.649 [2024-10-25 18:05:56.959564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.649 [2024-10-25 18:05:56.959603] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:24:38.649 [2024-10-25 18:05:56.959625] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:24:38.649 [2024-10-25 18:05:56.959667] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:24:38.649 [2024-10-25 18:05:56.959683] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:24:38.649 [2024-10-25 18:05:56.959789] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:24:38.649 [2024-10-25 18:05:56.959808] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:24:38.649 [2024-10-25 18:05:56.959819] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:24:38.649 [2024-10-25 18:05:56.959829] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:24:38.649 [2024-10-25 18:05:56.959838] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:24:38.649 [2024-10-25 18:05:56.959846] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:24:38.649 [2024-10-25 18:05:56.959856] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:24:38.649 [2024-10-25 18:05:56.959864] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:24:38.649 [2024-10-25 18:05:56.959871] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:24:38.649 [2024-10-25 18:05:56.959879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.649 [2024-10-25 18:05:56.959887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:24:38.649 [2024-10-25 18:05:56.959895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.279 ms 00:24:38.649 [2024-10-25 18:05:56.959903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.649 [2024-10-25 18:05:56.960000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.649 [2024-10-25 18:05:56.960009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:24:38.649 [2024-10-25 18:05:56.960017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:24:38.649 [2024-10-25 18:05:56.960027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.649 [2024-10-25 18:05:56.960132] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:24:38.649 [2024-10-25 18:05:56.960142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:24:38.649 [2024-10-25 18:05:56.960151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:38.649 [2024-10-25 18:05:56.960159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:38.649 [2024-10-25 18:05:56.960166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:24:38.649 [2024-10-25 18:05:56.960174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:24:38.649 [2024-10-25 18:05:56.960180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:24:38.649 [2024-10-25 18:05:56.960187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:24:38.649 [2024-10-25 18:05:56.960194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:24:38.649 [2024-10-25 18:05:56.960200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:38.649 [2024-10-25 18:05:56.960209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:24:38.649 [2024-10-25 18:05:56.960216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:24:38.649 [2024-10-25 18:05:56.960223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:38.649 [2024-10-25 18:05:56.960230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:24:38.649 [2024-10-25 18:05:56.960236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:24:38.649 [2024-10-25 18:05:56.960243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:38.649 [2024-10-25 18:05:56.960250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:24:38.649 [2024-10-25 18:05:56.960256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:24:38.649 [2024-10-25 18:05:56.960263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:38.649 [2024-10-25 18:05:56.960271] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:24:38.649 [2024-10-25 18:05:56.960277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:24:38.649 [2024-10-25 18:05:56.960284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:38.649 [2024-10-25 18:05:56.960291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:24:38.649 [2024-10-25 18:05:56.960298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:24:38.649 [2024-10-25 18:05:56.960305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:38.649 [2024-10-25 18:05:56.960317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:24:38.649 [2024-10-25 18:05:56.960324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:24:38.649 [2024-10-25 18:05:56.960330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:38.649 [2024-10-25 18:05:56.960337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:24:38.649 [2024-10-25 18:05:56.960343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:24:38.649 [2024-10-25 18:05:56.960350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:38.649 [2024-10-25 18:05:56.960356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:24:38.649 [2024-10-25 18:05:56.960362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:24:38.649 [2024-10-25 18:05:56.960368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:38.649 [2024-10-25 18:05:56.960375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:24:38.649 [2024-10-25 18:05:56.960381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:24:38.649 [2024-10-25 18:05:56.960387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:38.649 [2024-10-25 18:05:56.960394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:24:38.649 [2024-10-25 18:05:56.960401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:24:38.649 [2024-10-25 18:05:56.960407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:38.649 [2024-10-25 18:05:56.960414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:24:38.649 [2024-10-25 18:05:56.960420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:24:38.649 [2024-10-25 18:05:56.960429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:38.649 [2024-10-25 18:05:56.960435] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:24:38.649 [2024-10-25 18:05:56.960443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:24:38.649 [2024-10-25 18:05:56.960450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:38.649 [2024-10-25 18:05:56.960457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:38.649 [2024-10-25 18:05:56.960465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:24:38.649 [2024-10-25 18:05:56.960472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:24:38.649 [2024-10-25 18:05:56.960479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:24:38.649 [2024-10-25 18:05:56.960486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:24:38.649 [2024-10-25 18:05:56.960493] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:24:38.649 [2024-10-25 18:05:56.960500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:24:38.649 [2024-10-25 18:05:56.960509] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:24:38.649 [2024-10-25 18:05:56.960520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:38.649 [2024-10-25 18:05:56.960528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:24:38.649 [2024-10-25 18:05:56.960536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:24:38.649 [2024-10-25 18:05:56.960543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:24:38.649 [2024-10-25 18:05:56.960550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:24:38.649 [2024-10-25 18:05:56.960568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:24:38.649 [2024-10-25 18:05:56.960576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:24:38.649 [2024-10-25 18:05:56.960583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:24:38.649 [2024-10-25 18:05:56.960590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:24:38.649 [2024-10-25 18:05:56.960598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:24:38.649 [2024-10-25 18:05:56.960605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:24:38.649 [2024-10-25 18:05:56.960612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:24:38.649 [2024-10-25 18:05:56.960619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:24:38.649 [2024-10-25 18:05:56.960626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:24:38.649 [2024-10-25 18:05:56.960633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:24:38.649 [2024-10-25 18:05:56.960640] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:24:38.649 [2024-10-25 18:05:56.960648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:38.649 [2024-10-25 18:05:56.960656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:38.650 [2024-10-25 18:05:56.960663] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:24:38.650 [2024-10-25 18:05:56.960670] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:24:38.650 [2024-10-25 18:05:56.960678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:24:38.650 [2024-10-25 18:05:56.960686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.650 [2024-10-25 18:05:56.960693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:24:38.650 [2024-10-25 18:05:56.960701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.623 ms 00:24:38.650 [2024-10-25 18:05:56.960708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.650 [2024-10-25 18:05:56.960751] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:24:38.650 [2024-10-25 18:05:56.960762] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:24:41.179 [2024-10-25 18:05:59.257307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.179 [2024-10-25 18:05:59.257370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:24:41.179 [2024-10-25 18:05:59.257385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2296.545 ms 00:24:41.179 [2024-10-25 18:05:59.257394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.179 [2024-10-25 18:05:59.285318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.179 [2024-10-25 18:05:59.285358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:24:41.179 [2024-10-25 18:05:59.285371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.693 ms 00:24:41.179 [2024-10-25 18:05:59.285380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.179 [2024-10-25 18:05:59.285454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.179 [2024-10-25 18:05:59.285464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:24:41.179 [2024-10-25 18:05:59.285478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:24:41.179 [2024-10-25 18:05:59.285486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.179 [2024-10-25 18:05:59.318342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.179 [2024-10-25 18:05:59.318373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:24:41.179 [2024-10-25 18:05:59.318383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.818 ms 00:24:41.179 [2024-10-25 18:05:59.318392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.179 [2024-10-25 18:05:59.318422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.179 [2024-10-25 18:05:59.318430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:24:41.179 [2024-10-25 18:05:59.318439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:24:41.179 [2024-10-25 18:05:59.318446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.179 [2024-10-25 18:05:59.318884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.179 [2024-10-25 18:05:59.318902] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:24:41.179 [2024-10-25 18:05:59.318911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.390 ms 00:24:41.179 [2024-10-25 18:05:59.318919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.179 [2024-10-25 18:05:59.318963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.179 [2024-10-25 18:05:59.318973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:24:41.179 [2024-10-25 18:05:59.318982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:24:41.179 [2024-10-25 18:05:59.318989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.179 [2024-10-25 18:05:59.334530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.179 [2024-10-25 18:05:59.334576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:24:41.179 [2024-10-25 18:05:59.334587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.520 ms 00:24:41.179 [2024-10-25 18:05:59.334594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.179 [2024-10-25 18:05:59.347347] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:24:41.179 [2024-10-25 18:05:59.347377] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:24:41.179 [2024-10-25 18:05:59.347389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.179 [2024-10-25 18:05:59.347397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:24:41.179 [2024-10-25 18:05:59.347406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.686 ms 00:24:41.179 [2024-10-25 18:05:59.347413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.179 [2024-10-25 18:05:59.360829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.179 [2024-10-25 18:05:59.360864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:24:41.179 [2024-10-25 18:05:59.360874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.378 ms 00:24:41.179 [2024-10-25 18:05:59.360882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.179 [2024-10-25 18:05:59.372035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.179 [2024-10-25 18:05:59.372060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:24:41.179 [2024-10-25 18:05:59.372070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.115 ms 00:24:41.179 [2024-10-25 18:05:59.372077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.179 [2024-10-25 18:05:59.383154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.179 [2024-10-25 18:05:59.383178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:24:41.179 [2024-10-25 18:05:59.383188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.044 ms 00:24:41.179 [2024-10-25 18:05:59.383195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.179 [2024-10-25 18:05:59.383810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.179 [2024-10-25 18:05:59.383833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:24:41.179 [2024-10-25 
18:05:59.383846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.529 ms 00:24:41.179 [2024-10-25 18:05:59.383855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.179 [2024-10-25 18:05:59.450962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.180 [2024-10-25 18:05:59.451002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:24:41.180 [2024-10-25 18:05:59.451015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 67.088 ms 00:24:41.180 [2024-10-25 18:05:59.451024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.180 [2024-10-25 18:05:59.461985] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:24:41.180 [2024-10-25 18:05:59.462745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.180 [2024-10-25 18:05:59.462769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:24:41.180 [2024-10-25 18:05:59.462779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.681 ms 00:24:41.180 [2024-10-25 18:05:59.462787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.180 [2024-10-25 18:05:59.462867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.180 [2024-10-25 18:05:59.462878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:24:41.180 [2024-10-25 18:05:59.462889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:24:41.180 [2024-10-25 18:05:59.462897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.180 [2024-10-25 18:05:59.462957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.180 [2024-10-25 18:05:59.462975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:24:41.180 [2024-10-25 18:05:59.462983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:24:41.180 [2024-10-25 18:05:59.462991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.180 [2024-10-25 18:05:59.463013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.180 [2024-10-25 18:05:59.463021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:24:41.180 [2024-10-25 18:05:59.463029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:24:41.180 [2024-10-25 18:05:59.463039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.180 [2024-10-25 18:05:59.463073] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:24:41.180 [2024-10-25 18:05:59.463083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.180 [2024-10-25 18:05:59.463091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:24:41.180 [2024-10-25 18:05:59.463100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:24:41.180 [2024-10-25 18:05:59.463107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.180 [2024-10-25 18:05:59.485684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.180 [2024-10-25 18:05:59.485715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:24:41.180 [2024-10-25 18:05:59.485731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.558 ms 00:24:41.180 [2024-10-25 18:05:59.485753] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.180 [2024-10-25 18:05:59.485824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.180 [2024-10-25 18:05:59.485834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:24:41.180 [2024-10-25 18:05:59.485843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:24:41.180 [2024-10-25 18:05:59.485850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.180 [2024-10-25 18:05:59.486852] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2553.131 ms, result 0 00:24:41.180 [2024-10-25 18:05:59.502057] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:41.180 [2024-10-25 18:05:59.518040] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:24:41.180 [2024-10-25 18:05:59.526194] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:41.746 18:06:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:41.746 18:06:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:24:41.746 18:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:41.746 18:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:24:41.746 18:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:24:42.003 [2024-10-25 18:06:00.194723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:42.003 [2024-10-25 18:06:00.194774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:24:42.003 [2024-10-25 18:06:00.194789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:24:42.003 [2024-10-25 18:06:00.194798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:42.003 [2024-10-25 18:06:00.194824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:42.003 [2024-10-25 18:06:00.194834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:24:42.003 [2024-10-25 18:06:00.194842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:24:42.003 [2024-10-25 18:06:00.194850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:42.003 [2024-10-25 18:06:00.194870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:42.003 [2024-10-25 18:06:00.194879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:24:42.003 [2024-10-25 18:06:00.194887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:24:42.003 [2024-10-25 18:06:00.194895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:42.003 [2024-10-25 18:06:00.194955] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.222 ms, result 0 00:24:42.003 true 00:24:42.003 18:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:24:42.003 { 00:24:42.003 "name": "ftl", 00:24:42.003 "properties": [ 00:24:42.003 { 00:24:42.003 "name": "superblock_version", 00:24:42.003 "value": 5, 00:24:42.003 "read-only": true 00:24:42.003 }, 
00:24:42.003 { 00:24:42.003 "name": "base_device", 00:24:42.003 "bands": [ 00:24:42.003 { 00:24:42.003 "id": 0, 00:24:42.003 "state": "CLOSED", 00:24:42.003 "validity": 1.0 00:24:42.003 }, 00:24:42.003 { 00:24:42.003 "id": 1, 00:24:42.003 "state": "CLOSED", 00:24:42.003 "validity": 1.0 00:24:42.003 }, 00:24:42.003 { 00:24:42.003 "id": 2, 00:24:42.003 "state": "CLOSED", 00:24:42.003 "validity": 0.007843137254901933 00:24:42.003 }, 00:24:42.003 { 00:24:42.003 "id": 3, 00:24:42.003 "state": "FREE", 00:24:42.003 "validity": 0.0 00:24:42.003 }, 00:24:42.003 { 00:24:42.003 "id": 4, 00:24:42.003 "state": "FREE", 00:24:42.004 "validity": 0.0 00:24:42.004 }, 00:24:42.004 { 00:24:42.004 "id": 5, 00:24:42.004 "state": "FREE", 00:24:42.004 "validity": 0.0 00:24:42.004 }, 00:24:42.004 { 00:24:42.004 "id": 6, 00:24:42.004 "state": "FREE", 00:24:42.004 "validity": 0.0 00:24:42.004 }, 00:24:42.004 { 00:24:42.004 "id": 7, 00:24:42.004 "state": "FREE", 00:24:42.004 "validity": 0.0 00:24:42.004 }, 00:24:42.004 { 00:24:42.004 "id": 8, 00:24:42.004 "state": "FREE", 00:24:42.004 "validity": 0.0 00:24:42.004 }, 00:24:42.004 { 00:24:42.004 "id": 9, 00:24:42.004 "state": "FREE", 00:24:42.004 "validity": 0.0 00:24:42.004 }, 00:24:42.004 { 00:24:42.004 "id": 10, 00:24:42.004 "state": "FREE", 00:24:42.004 "validity": 0.0 00:24:42.004 }, 00:24:42.004 { 00:24:42.004 "id": 11, 00:24:42.004 "state": "FREE", 00:24:42.004 "validity": 0.0 00:24:42.004 }, 00:24:42.004 { 00:24:42.004 "id": 12, 00:24:42.004 "state": "FREE", 00:24:42.004 "validity": 0.0 00:24:42.004 }, 00:24:42.004 { 00:24:42.004 "id": 13, 00:24:42.004 "state": "FREE", 00:24:42.004 "validity": 0.0 00:24:42.004 }, 00:24:42.004 { 00:24:42.004 "id": 14, 00:24:42.004 "state": "FREE", 00:24:42.004 "validity": 0.0 00:24:42.004 }, 00:24:42.004 { 00:24:42.004 "id": 15, 00:24:42.004 "state": "FREE", 00:24:42.004 "validity": 0.0 00:24:42.004 }, 00:24:42.004 { 00:24:42.004 "id": 16, 00:24:42.004 "state": "FREE", 00:24:42.004 "validity": 0.0 00:24:42.004 }, 00:24:42.004 { 00:24:42.004 "id": 17, 00:24:42.004 "state": "FREE", 00:24:42.004 "validity": 0.0 00:24:42.004 } 00:24:42.004 ], 00:24:42.004 "read-only": true 00:24:42.004 }, 00:24:42.004 { 00:24:42.004 "name": "cache_device", 00:24:42.004 "type": "bdev", 00:24:42.004 "chunks": [ 00:24:42.004 { 00:24:42.004 "id": 0, 00:24:42.004 "state": "INACTIVE", 00:24:42.004 "utilization": 0.0 00:24:42.004 }, 00:24:42.004 { 00:24:42.004 "id": 1, 00:24:42.004 "state": "OPEN", 00:24:42.004 "utilization": 0.0 00:24:42.004 }, 00:24:42.004 { 00:24:42.004 "id": 2, 00:24:42.004 "state": "OPEN", 00:24:42.004 "utilization": 0.0 00:24:42.004 }, 00:24:42.004 { 00:24:42.004 "id": 3, 00:24:42.004 "state": "FREE", 00:24:42.004 "utilization": 0.0 00:24:42.004 }, 00:24:42.004 { 00:24:42.004 "id": 4, 00:24:42.004 "state": "FREE", 00:24:42.004 "utilization": 0.0 00:24:42.004 } 00:24:42.004 ], 00:24:42.004 "read-only": true 00:24:42.004 }, 00:24:42.004 { 00:24:42.004 "name": "verbose_mode", 00:24:42.004 "value": true, 00:24:42.004 "unit": "", 00:24:42.004 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:24:42.004 }, 00:24:42.004 { 00:24:42.004 "name": "prep_upgrade_on_shutdown", 00:24:42.004 "value": false, 00:24:42.004 "unit": "", 00:24:42.004 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:24:42.004 } 00:24:42.004 ] 00:24:42.004 } 00:24:42.004 18:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:24:42.004 18:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:24:42.004 18:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:24:42.262 18:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:24:42.262 18:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:24:42.262 18:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:24:42.262 18:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:24:42.262 18:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:24:42.519 18:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:24:42.519 18:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:24:42.519 18:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:24:42.519 18:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:24:42.519 18:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:24:42.519 18:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:24:42.519 Validate MD5 checksum, iteration 1 00:24:42.519 18:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:24:42.519 18:06:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:42.519 18:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:42.519 18:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:42.519 18:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:42.519 18:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:42.519 18:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:42.519 [2024-10-25 18:06:00.913006] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:24:42.519 [2024-10-25 18:06:00.913125] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78248 ] 00:24:42.775 [2024-10-25 18:06:01.073213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.775 [2024-10-25 18:06:01.169465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.668  [2024-10-25T18:06:03.360Z] Copying: 693/1024 [MB] (693 MBps) [2024-10-25T18:06:04.293Z] Copying: 1024/1024 [MB] (average 667 MBps) 00:24:45.858 00:24:45.858 18:06:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:24:45.858 18:06:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:48.387 18:06:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:24:48.387 Validate MD5 checksum, iteration 2 00:24:48.387 18:06:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c4d6bae3d44fe099ed98735e76b81b6b 00:24:48.387 18:06:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c4d6bae3d44fe099ed98735e76b81b6b != \c\4\d\6\b\a\e\3\d\4\4\f\e\0\9\9\e\d\9\8\7\3\5\e\7\6\b\8\1\b\6\b ]] 00:24:48.387 18:06:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:24:48.387 18:06:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:24:48.387 18:06:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:24:48.387 18:06:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:24:48.387 18:06:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:48.387 18:06:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:48.387 18:06:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:48.387 18:06:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:48.387 18:06:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:24:48.387 [2024-10-25 18:06:06.270721] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:24:48.387 [2024-10-25 18:06:06.270847] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78310 ] 00:24:48.387 [2024-10-25 18:06:06.432683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.387 [2024-10-25 18:06:06.531416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:49.763  [2024-10-25T18:06:08.763Z] Copying: 669/1024 [MB] (669 MBps) [2024-10-25T18:06:09.329Z] Copying: 1024/1024 [MB] (average 660 MBps) 00:24:50.894 00:24:50.894 18:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:24:50.894 18:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:53.423 18:06:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:24:53.423 18:06:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ddf9cd651c2208286f17941dcbb56589 00:24:53.423 18:06:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ddf9cd651c2208286f17941dcbb56589 != \d\d\f\9\c\d\6\5\1\c\2\2\0\8\2\8\6\f\1\7\9\4\1\d\c\b\b\5\6\5\8\9 ]] 00:24:53.423 18:06:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:24:53.423 18:06:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:24:53.423 18:06:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:24:53.423 18:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 78174 ]] 00:24:53.423 18:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 78174 00:24:53.423 18:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:24:53.423 18:06:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:24:53.423 18:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:24:53.423 18:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:24:53.423 18:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:53.423 18:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=78371 00:24:53.423 18:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:24:53.423 18:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 78371 00:24:53.423 18:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 78371 ']' 00:24:53.423 18:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:53.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:53.423 18:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:53.423 18:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
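Iteration 2 also matched (ddf9cd651c2208286f17941dcbb56589), so both pre-kill digests are on record. The test then forces a dirty shutdown: tcp_target_shutdown_dirty (ftl/common.sh@137-139) SIGKILLs the live target, pid 78174, so FTL never gets to persist its clean-shutdown marker, and tcp_target_setup (ftl/common.sh@81-91) immediately relaunches spdk_tgt from the tgt.json saved earlier and waits for its RPC socket. A sketch of those two helpers, paraphrased from the common.sh line numbers in the trace rather than copied verbatim:

    tcp_target_shutdown_dirty() {          # ftl/common.sh@137-139
        [[ -n $spdk_tgt_pid ]] && kill -9 "$spdk_tgt_pid"
        unset spdk_tgt_pid
    }

    tcp_target_setup() {                   # ftl/common.sh@81-91, paraphrased
        local base_bdev= cache_bdev=
        "$rootdir/build/bin/spdk_tgt" "--cpumask=[0]" \
            --config="$rootdir/test/ftl/config/tgt.json" &
        spdk_tgt_pid=$!
        waitforlisten "$spdk_tgt_pid"      # blocks until /var/tmp/spdk.sock answers
    }
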
00:24:53.423 18:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:53.423 18:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:53.423 18:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:53.423 [2024-10-25 18:06:11.475274] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:24:53.423 [2024-10-25 18:06:11.475403] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78371 ] 00:24:53.423 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 78174 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:24:53.423 [2024-10-25 18:06:11.634923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.423 [2024-10-25 18:06:11.744188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.358 [2024-10-25 18:06:12.476799] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:24:54.358 [2024-10-25 18:06:12.476862] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:24:54.358 [2024-10-25 18:06:12.621366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.358 [2024-10-25 18:06:12.621409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:24:54.358 [2024-10-25 18:06:12.621423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:54.358 [2024-10-25 18:06:12.621431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.358 [2024-10-25 18:06:12.621479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.358 [2024-10-25 18:06:12.621489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:24:54.358 [2024-10-25 18:06:12.621497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:24:54.358 [2024-10-25 18:06:12.621505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.358 [2024-10-25 18:06:12.621527] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:24:54.358 [2024-10-25 18:06:12.622244] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:24:54.358 [2024-10-25 18:06:12.622267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.358 [2024-10-25 18:06:12.622275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:24:54.358 [2024-10-25 18:06:12.622285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.748 ms 00:24:54.358 [2024-10-25 18:06:12.622293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.358 [2024-10-25 18:06:12.622598] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:24:54.358 [2024-10-25 18:06:12.638929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.358 [2024-10-25 18:06:12.638960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:24:54.358 [2024-10-25 18:06:12.638973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.331 ms 
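The new target (pid 78371) reopens the FTL bdev, and ftl_mngt_load_sb reports "SHM: clean 0, shm_clean 0": the superblock carries no clean-shutdown marker, so startup takes the recovery path traced below (recover band state, restore P2L checkpoints, recover open chunks). Once the target is back up, the band inspection the harness ran before the kill can be repeated by hand with the same rpc.py/jq pipeline seen earlier in the trace:

    scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "bands")
               | .bands[] | select(.state == "OPENED")] | length'
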
00:24:54.358 [2024-10-25 18:06:12.638981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.358 [2024-10-25 18:06:12.648025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.358 [2024-10-25 18:06:12.648051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:24:54.358 [2024-10-25 18:06:12.648063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:24:54.358 [2024-10-25 18:06:12.648071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.358 [2024-10-25 18:06:12.648381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.358 [2024-10-25 18:06:12.648397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:24:54.358 [2024-10-25 18:06:12.648406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.237 ms 00:24:54.358 [2024-10-25 18:06:12.648413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.358 [2024-10-25 18:06:12.648461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.358 [2024-10-25 18:06:12.648472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:24:54.358 [2024-10-25 18:06:12.648480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:24:54.358 [2024-10-25 18:06:12.648487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.358 [2024-10-25 18:06:12.648510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.358 [2024-10-25 18:06:12.648518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:24:54.358 [2024-10-25 18:06:12.648526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:24:54.358 [2024-10-25 18:06:12.648533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.358 [2024-10-25 18:06:12.648578] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:24:54.358 [2024-10-25 18:06:12.651536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.358 [2024-10-25 18:06:12.651570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:24:54.358 [2024-10-25 18:06:12.651579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.987 ms 00:24:54.358 [2024-10-25 18:06:12.651586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.358 [2024-10-25 18:06:12.651613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.358 [2024-10-25 18:06:12.651625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:24:54.358 [2024-10-25 18:06:12.651632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:54.358 [2024-10-25 18:06:12.651639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.358 [2024-10-25 18:06:12.651658] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:24:54.358 [2024-10-25 18:06:12.651676] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:24:54.358 [2024-10-25 18:06:12.651711] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:24:54.358 [2024-10-25 18:06:12.651726] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:24:54.358 [2024-10-25 
18:06:12.651836] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:24:54.358 [2024-10-25 18:06:12.651846] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:24:54.358 [2024-10-25 18:06:12.651856] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:24:54.358 [2024-10-25 18:06:12.651867] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:24:54.358 [2024-10-25 18:06:12.651875] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:24:54.358 [2024-10-25 18:06:12.651884] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:24:54.358 [2024-10-25 18:06:12.651891] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:24:54.358 [2024-10-25 18:06:12.651898] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:24:54.358 [2024-10-25 18:06:12.651904] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:24:54.358 [2024-10-25 18:06:12.651912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.358 [2024-10-25 18:06:12.651919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:24:54.358 [2024-10-25 18:06:12.651929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.256 ms 00:24:54.358 [2024-10-25 18:06:12.651936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.358 [2024-10-25 18:06:12.652027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.358 [2024-10-25 18:06:12.652035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:24:54.358 [2024-10-25 18:06:12.652042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.075 ms 00:24:54.359 [2024-10-25 18:06:12.652049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.359 [2024-10-25 18:06:12.652166] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:24:54.359 [2024-10-25 18:06:12.652176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:24:54.359 [2024-10-25 18:06:12.652184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:54.359 [2024-10-25 18:06:12.652195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:54.359 [2024-10-25 18:06:12.652202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:24:54.359 [2024-10-25 18:06:12.652209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:24:54.359 [2024-10-25 18:06:12.652216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:24:54.359 [2024-10-25 18:06:12.652223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:24:54.359 [2024-10-25 18:06:12.652231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:24:54.359 [2024-10-25 18:06:12.652237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:54.359 [2024-10-25 18:06:12.652244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:24:54.359 [2024-10-25 18:06:12.652251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:24:54.359 [2024-10-25 18:06:12.652257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:54.359 [2024-10-25 
18:06:12.652263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:24:54.359 [2024-10-25 18:06:12.652270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:24:54.359 [2024-10-25 18:06:12.652276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:54.359 [2024-10-25 18:06:12.652282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:24:54.359 [2024-10-25 18:06:12.652289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:24:54.359 [2024-10-25 18:06:12.652295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:54.359 [2024-10-25 18:06:12.652302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:24:54.359 [2024-10-25 18:06:12.652308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:24:54.359 [2024-10-25 18:06:12.652314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:54.359 [2024-10-25 18:06:12.652320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:24:54.359 [2024-10-25 18:06:12.652333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:24:54.359 [2024-10-25 18:06:12.652339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:54.359 [2024-10-25 18:06:12.652345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:24:54.359 [2024-10-25 18:06:12.652351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:24:54.359 [2024-10-25 18:06:12.652357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:54.359 [2024-10-25 18:06:12.652363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:24:54.359 [2024-10-25 18:06:12.652369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:24:54.359 [2024-10-25 18:06:12.652376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:54.359 [2024-10-25 18:06:12.652382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:24:54.359 [2024-10-25 18:06:12.652388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:24:54.359 [2024-10-25 18:06:12.652395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:54.359 [2024-10-25 18:06:12.652401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:24:54.359 [2024-10-25 18:06:12.652408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:24:54.359 [2024-10-25 18:06:12.652415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:54.359 [2024-10-25 18:06:12.652422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:24:54.359 [2024-10-25 18:06:12.652428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:24:54.359 [2024-10-25 18:06:12.652435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:54.359 [2024-10-25 18:06:12.652442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:24:54.359 [2024-10-25 18:06:12.652448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:24:54.359 [2024-10-25 18:06:12.652454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:54.359 [2024-10-25 18:06:12.652460] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:24:54.359 [2024-10-25 18:06:12.652469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:24:54.359 
[2024-10-25 18:06:12.652476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:54.359 [2024-10-25 18:06:12.652484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:54.359 [2024-10-25 18:06:12.652491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:24:54.359 [2024-10-25 18:06:12.652499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:24:54.359 [2024-10-25 18:06:12.652506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:24:54.359 [2024-10-25 18:06:12.652512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:24:54.359 [2024-10-25 18:06:12.652519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:24:54.359 [2024-10-25 18:06:12.652525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:24:54.359 [2024-10-25 18:06:12.652533] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:24:54.359 [2024-10-25 18:06:12.652542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:54.359 [2024-10-25 18:06:12.652550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:24:54.359 [2024-10-25 18:06:12.652575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:24:54.359 [2024-10-25 18:06:12.652583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:24:54.359 [2024-10-25 18:06:12.652590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:24:54.359 [2024-10-25 18:06:12.652597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:24:54.359 [2024-10-25 18:06:12.652604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:24:54.359 [2024-10-25 18:06:12.652611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:24:54.359 [2024-10-25 18:06:12.652618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:24:54.359 [2024-10-25 18:06:12.652625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:24:54.359 [2024-10-25 18:06:12.652633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:24:54.359 [2024-10-25 18:06:12.652640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:24:54.359 [2024-10-25 18:06:12.652647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:24:54.359 [2024-10-25 18:06:12.652654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:24:54.359 [2024-10-25 18:06:12.652663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:24:54.359 [2024-10-25 18:06:12.652670] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:24:54.359 [2024-10-25 18:06:12.652678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:54.360 [2024-10-25 18:06:12.652687] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:54.360 [2024-10-25 18:06:12.652695] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:24:54.360 [2024-10-25 18:06:12.652703] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:24:54.360 [2024-10-25 18:06:12.652710] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:24:54.360 [2024-10-25 18:06:12.652718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.360 [2024-10-25 18:06:12.652727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:24:54.360 [2024-10-25 18:06:12.652735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.624 ms 00:24:54.360 [2024-10-25 18:06:12.652742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.360 [2024-10-25 18:06:12.678318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.360 [2024-10-25 18:06:12.678349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:24:54.360 [2024-10-25 18:06:12.678359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.526 ms 00:24:54.360 [2024-10-25 18:06:12.678367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.360 [2024-10-25 18:06:12.678404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.360 [2024-10-25 18:06:12.678413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:24:54.360 [2024-10-25 18:06:12.678421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:24:54.360 [2024-10-25 18:06:12.678429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.360 [2024-10-25 18:06:12.710455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.360 [2024-10-25 18:06:12.710484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:24:54.360 [2024-10-25 18:06:12.710494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.973 ms 00:24:54.360 [2024-10-25 18:06:12.710502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.360 [2024-10-25 18:06:12.710530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.360 [2024-10-25 18:06:12.710539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:24:54.360 [2024-10-25 18:06:12.710546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:24:54.360 [2024-10-25 18:06:12.710562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.360 [2024-10-25 18:06:12.710654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.360 [2024-10-25 18:06:12.710664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
00:24:54.360 [2024-10-25 18:06:12.710672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:24:54.360 [2024-10-25 18:06:12.710679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.360 [2024-10-25 18:06:12.710719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.360 [2024-10-25 18:06:12.710727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:24:54.360 [2024-10-25 18:06:12.710735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:24:54.360 [2024-10-25 18:06:12.710742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.360 [2024-10-25 18:06:12.726122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.360 [2024-10-25 18:06:12.726148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:24:54.360 [2024-10-25 18:06:12.726158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.359 ms 00:24:54.360 [2024-10-25 18:06:12.726165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.360 [2024-10-25 18:06:12.726279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.360 [2024-10-25 18:06:12.726290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:24:54.360 [2024-10-25 18:06:12.726299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:54.360 [2024-10-25 18:06:12.726307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.360 [2024-10-25 18:06:12.753814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.360 [2024-10-25 18:06:12.753850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:24:54.360 [2024-10-25 18:06:12.753863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.488 ms 00:24:54.360 [2024-10-25 18:06:12.753871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.360 [2024-10-25 18:06:12.763387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.360 [2024-10-25 18:06:12.763429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:24:54.360 [2024-10-25 18:06:12.763439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.524 ms 00:24:54.360 [2024-10-25 18:06:12.763454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.618 [2024-10-25 18:06:12.821324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.618 [2024-10-25 18:06:12.821369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:24:54.618 [2024-10-25 18:06:12.821388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 57.817 ms 00:24:54.618 [2024-10-25 18:06:12.821397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.618 [2024-10-25 18:06:12.821577] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:24:54.618 [2024-10-25 18:06:12.821689] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:24:54.618 [2024-10-25 18:06:12.821807] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:24:54.618 [2024-10-25 18:06:12.821908] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:24:54.618 [2024-10-25 18:06:12.821917] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.618 [2024-10-25 18:06:12.821925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:24:54.618 [2024-10-25 18:06:12.821935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.458 ms 00:24:54.618 [2024-10-25 18:06:12.821943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.618 [2024-10-25 18:06:12.822010] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:24:54.618 [2024-10-25 18:06:12.822023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.618 [2024-10-25 18:06:12.822031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:24:54.618 [2024-10-25 18:06:12.822043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:24:54.618 [2024-10-25 18:06:12.822051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.618 [2024-10-25 18:06:12.836919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.618 [2024-10-25 18:06:12.836949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:24:54.618 [2024-10-25 18:06:12.836964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.845 ms 00:24:54.618 [2024-10-25 18:06:12.836972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.618 [2024-10-25 18:06:12.845395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.618 [2024-10-25 18:06:12.845420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:24:54.618 [2024-10-25 18:06:12.845431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:24:54.618 [2024-10-25 18:06:12.845438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:54.618 [2024-10-25 18:06:12.845523] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:24:54.618 [2024-10-25 18:06:12.845699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:54.618 [2024-10-25 18:06:12.845713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:24:54.618 [2024-10-25 18:06:12.845722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.177 ms 00:24:54.618 [2024-10-25 18:06:12.845729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.184 [2024-10-25 18:06:13.315107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.184 [2024-10-25 18:06:13.315170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:24:55.184 [2024-10-25 18:06:13.315186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 468.552 ms 00:24:55.184 [2024-10-25 18:06:13.315195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.184 [2024-10-25 18:06:13.319448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.184 [2024-10-25 18:06:13.319479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:24:55.184 [2024-10-25 18:06:13.319490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.223 ms 00:24:55.184 [2024-10-25 18:06:13.319498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.184 [2024-10-25 18:06:13.320030] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:24:55.184 [2024-10-25 18:06:13.320065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.184 [2024-10-25 18:06:13.320074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:24:55.184 [2024-10-25 18:06:13.320084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.540 ms 00:24:55.184 [2024-10-25 18:06:13.320092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.184 [2024-10-25 18:06:13.320122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.184 [2024-10-25 18:06:13.320132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:24:55.184 [2024-10-25 18:06:13.320141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:55.184 [2024-10-25 18:06:13.320149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.184 [2024-10-25 18:06:13.320185] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 474.661 ms, result 0 00:24:55.184 [2024-10-25 18:06:13.320224] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:24:55.184 [2024-10-25 18:06:13.320402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.184 [2024-10-25 18:06:13.320412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:24:55.184 [2024-10-25 18:06:13.320420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.179 ms 00:24:55.184 [2024-10-25 18:06:13.320428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.442 [2024-10-25 18:06:13.871800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.442 [2024-10-25 18:06:13.871836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:24:55.442 [2024-10-25 18:06:13.871847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 550.447 ms 00:24:55.442 [2024-10-25 18:06:13.871855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.442 [2024-10-25 18:06:13.876139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.442 [2024-10-25 18:06:13.876166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:24:55.442 [2024-10-25 18:06:13.876176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.168 ms 00:24:55.442 [2024-10-25 18:06:13.876182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.700 [2024-10-25 18:06:13.877003] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:24:55.700 [2024-10-25 18:06:13.877043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.700 [2024-10-25 18:06:13.877052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:24:55.700 [2024-10-25 18:06:13.877062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.835 ms 00:24:55.700 [2024-10-25 18:06:13.877070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.700 [2024-10-25 18:06:13.877108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.701 [2024-10-25 18:06:13.877118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:24:55.701 [2024-10-25 18:06:13.877126] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:55.701 [2024-10-25 18:06:13.877133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.701 [2024-10-25 18:06:13.877169] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 556.939 ms, result 0 00:24:55.701 [2024-10-25 18:06:13.877211] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:55.701 [2024-10-25 18:06:13.877222] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:24:55.701 [2024-10-25 18:06:13.877232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.701 [2024-10-25 18:06:13.877240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:24:55.701 [2024-10-25 18:06:13.877249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1031.728 ms 00:24:55.701 [2024-10-25 18:06:13.877257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.701 [2024-10-25 18:06:13.877285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.701 [2024-10-25 18:06:13.877294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:24:55.701 [2024-10-25 18:06:13.877306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:24:55.701 [2024-10-25 18:06:13.877313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.701 [2024-10-25 18:06:13.888825] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:24:55.701 [2024-10-25 18:06:13.888938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.701 [2024-10-25 18:06:13.888949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:24:55.701 [2024-10-25 18:06:13.888958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.609 ms 00:24:55.701 [2024-10-25 18:06:13.888965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.701 [2024-10-25 18:06:13.889662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.701 [2024-10-25 18:06:13.889680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:24:55.701 [2024-10-25 18:06:13.889689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.629 ms 00:24:55.701 [2024-10-25 18:06:13.889699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.701 [2024-10-25 18:06:13.891986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.701 [2024-10-25 18:06:13.892006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:24:55.701 [2024-10-25 18:06:13.892016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.271 ms 00:24:55.701 [2024-10-25 18:06:13.892024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.701 [2024-10-25 18:06:13.892061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.701 [2024-10-25 18:06:13.892069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:24:55.701 [2024-10-25 18:06:13.892077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:24:55.701 [2024-10-25 18:06:13.892084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.701 [2024-10-25 18:06:13.892191] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.701 [2024-10-25 18:06:13.892200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:24:55.701 [2024-10-25 18:06:13.892208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:24:55.701 [2024-10-25 18:06:13.892215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.701 [2024-10-25 18:06:13.892235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.701 [2024-10-25 18:06:13.892243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:24:55.701 [2024-10-25 18:06:13.892250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:24:55.701 [2024-10-25 18:06:13.892257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.701 [2024-10-25 18:06:13.892284] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:24:55.701 [2024-10-25 18:06:13.892297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.701 [2024-10-25 18:06:13.892304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:24:55.701 [2024-10-25 18:06:13.892312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:24:55.701 [2024-10-25 18:06:13.892318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.701 [2024-10-25 18:06:13.892372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:55.701 [2024-10-25 18:06:13.892380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:24:55.701 [2024-10-25 18:06:13.892388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:24:55.701 [2024-10-25 18:06:13.892395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:55.701 [2024-10-25 18:06:13.893408] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1271.577 ms, result 0 00:24:55.701 [2024-10-25 18:06:13.905786] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:55.701 [2024-10-25 18:06:13.921778] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:24:55.701 [2024-10-25 18:06:13.930143] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:55.701 18:06:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:55.701 18:06:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:24:55.701 18:06:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:55.701 18:06:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:24:55.701 18:06:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:24:55.701 18:06:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:24:55.701 18:06:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:24:55.701 18:06:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:24:55.701 Validate MD5 checksum, iteration 1 00:24:55.701 18:06:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:24:55.701 18:06:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:55.701 18:06:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:55.701 18:06:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:55.701 18:06:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:55.701 18:06:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:55.701 18:06:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:55.701 [2024-10-25 18:06:14.089417] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:24:55.701 [2024-10-25 18:06:14.089527] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78405 ] 00:24:55.978 [2024-10-25 18:06:14.249059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.978 [2024-10-25 18:06:14.345332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.889  [2024-10-25T18:06:16.584Z] Copying: 676/1024 [MB] (676 MBps) [2024-10-25T18:06:21.864Z] Copying: 1024/1024 [MB] (average 661 MBps) 00:25:03.429 00:25:03.429 18:06:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:25:03.429 18:06:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:04.810 18:06:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:25:04.810 Validate MD5 checksum, iteration 2 00:25:04.810 18:06:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c4d6bae3d44fe099ed98735e76b81b6b 00:25:04.810 18:06:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c4d6bae3d44fe099ed98735e76b81b6b != \c\4\d\6\b\a\e\3\d\4\4\f\e\0\9\9\e\d\9\8\7\3\5\e\7\6\b\8\1\b\6\b ]] 00:25:04.810 18:06:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:25:04.810 18:06:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:04.810 18:06:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:25:04.810 18:06:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:04.810 18:06:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:04.810 18:06:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:04.810 18:06:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:04.810 18:06:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:04.810 18:06:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:04.810 [2024-10-25 18:06:23.078427] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 00:25:04.810 [2024-10-25 18:06:23.078606] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78508 ] 00:25:05.069 [2024-10-25 18:06:23.255828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.069 [2024-10-25 18:06:23.353990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.443  [2024-10-25T18:06:25.812Z] Copying: 625/1024 [MB] (625 MBps) [2024-10-25T18:06:29.095Z] Copying: 1024/1024 [MB] (average 624 MBps) 00:25:10.660 00:25:10.660 18:06:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:25:10.660 18:06:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:12.562 18:06:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:25:12.562 18:06:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ddf9cd651c2208286f17941dcbb56589 00:25:12.562 18:06:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ddf9cd651c2208286f17941dcbb56589 != \d\d\f\9\c\d\6\5\1\c\2\2\0\8\2\8\6\f\1\7\9\4\1\d\c\b\b\5\6\5\8\9 ]] 00:25:12.562 18:06:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:25:12.562 18:06:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:12.562 18:06:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:25:12.562 18:06:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:25:12.562 18:06:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:25:12.562 18:06:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:12.562 18:06:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:25:12.562 18:06:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:25:12.562 18:06:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:25:12.562 18:06:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:25:12.562 18:06:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 78371 ]] 00:25:12.562 18:06:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 78371 00:25:12.562 18:06:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 78371 ']' 00:25:12.562 18:06:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 78371 00:25:12.562 18:06:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:25:12.562 18:06:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:12.562 18:06:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78371 00:25:12.562 killing process with pid 78371 00:25:12.562 18:06:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:12.562 18:06:30 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:12.562 18:06:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78371' 00:25:12.562 18:06:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 78371 00:25:12.562 18:06:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 78371 00:25:12.822 [2024-10-25 18:06:31.189822] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:25:12.822 [2024-10-25 18:06:31.202913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:12.822 [2024-10-25 18:06:31.202952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:25:12.822 [2024-10-25 18:06:31.202964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:12.822 [2024-10-25 18:06:31.202971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:12.822 [2024-10-25 18:06:31.202990] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:25:12.822 [2024-10-25 18:06:31.205193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:12.822 [2024-10-25 18:06:31.205219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:25:12.822 [2024-10-25 18:06:31.205228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.191 ms 00:25:12.822 [2024-10-25 18:06:31.205238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:12.822 [2024-10-25 18:06:31.205431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:12.822 [2024-10-25 18:06:31.205446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:25:12.822 [2024-10-25 18:06:31.205453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.173 ms 00:25:12.822 [2024-10-25 18:06:31.205459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:12.822 [2024-10-25 18:06:31.206574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:12.822 [2024-10-25 18:06:31.206600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:25:12.822 [2024-10-25 18:06:31.206608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.102 ms 00:25:12.822 [2024-10-25 18:06:31.206615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:12.822 [2024-10-25 18:06:31.207469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:12.822 [2024-10-25 18:06:31.207493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:25:12.822 [2024-10-25 18:06:31.207501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.830 ms 00:25:12.822 [2024-10-25 18:06:31.207507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:12.822 [2024-10-25 18:06:31.215073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:12.822 [2024-10-25 18:06:31.215102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:25:12.822 [2024-10-25 18:06:31.215110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.529 ms 00:25:12.822 [2024-10-25 18:06:31.215117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:12.822 [2024-10-25 18:06:31.219343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:12.822 [2024-10-25 18:06:31.219370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:25:12.822 [2024-10-25 18:06:31.219379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.194 ms 00:25:12.822 [2024-10-25 18:06:31.219388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:12.822 [2024-10-25 18:06:31.219452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:12.822 [2024-10-25 18:06:31.219460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:25:12.822 [2024-10-25 18:06:31.219467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:25:12.822 [2024-10-25 18:06:31.219474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:12.822 [2024-10-25 18:06:31.226503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:12.822 [2024-10-25 18:06:31.226530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:25:12.822 [2024-10-25 18:06:31.226537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.016 ms 00:25:12.822 [2024-10-25 18:06:31.226544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:12.822 [2024-10-25 18:06:31.233595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:12.822 [2024-10-25 18:06:31.233621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:25:12.822 [2024-10-25 18:06:31.233628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.019 ms 00:25:12.822 [2024-10-25 18:06:31.233634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:12.822 [2024-10-25 18:06:31.240525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:12.822 [2024-10-25 18:06:31.240552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:25:12.822 [2024-10-25 18:06:31.240566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.864 ms 00:25:12.822 [2024-10-25 18:06:31.240572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:12.822 [2024-10-25 18:06:31.247858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:12.822 [2024-10-25 18:06:31.247886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:25:12.822 [2024-10-25 18:06:31.247893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.165 ms 00:25:12.822 [2024-10-25 18:06:31.247899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:12.822 [2024-10-25 18:06:31.247925] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:25:12.822 [2024-10-25 18:06:31.247942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:12.822 [2024-10-25 18:06:31.247950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:25:12.822 [2024-10-25 18:06:31.247957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:25:12.822 [2024-10-25 18:06:31.247963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:12.822 [2024-10-25 18:06:31.247970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:12.822 [2024-10-25 18:06:31.247976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:12.822 [2024-10-25 18:06:31.247982] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:12.822 [2024-10-25 18:06:31.247987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:12.822 [2024-10-25 18:06:31.247994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:12.822 [2024-10-25 18:06:31.248000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:12.822 [2024-10-25 18:06:31.248005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:12.822 [2024-10-25 18:06:31.248012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:12.822 [2024-10-25 18:06:31.248018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:12.822 [2024-10-25 18:06:31.248024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:12.822 [2024-10-25 18:06:31.248029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:12.822 [2024-10-25 18:06:31.248035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:12.822 [2024-10-25 18:06:31.248041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:12.822 [2024-10-25 18:06:31.248047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:12.822 [2024-10-25 18:06:31.248054] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:25:12.822 [2024-10-25 18:06:31.248060] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: bb14d880-e949-4bda-a944-28dc5de98510 00:25:12.822 [2024-10-25 18:06:31.248067] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:25:12.822 [2024-10-25 18:06:31.248073] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:25:12.822 [2024-10-25 18:06:31.248079] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:25:12.822 [2024-10-25 18:06:31.248086] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:25:12.822 [2024-10-25 18:06:31.248091] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:25:12.822 [2024-10-25 18:06:31.248096] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:25:12.822 [2024-10-25 18:06:31.248102] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:25:12.822 [2024-10-25 18:06:31.248107] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:25:12.822 [2024-10-25 18:06:31.248112] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:25:12.822 [2024-10-25 18:06:31.248117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:12.822 [2024-10-25 18:06:31.248124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:25:12.822 [2024-10-25 18:06:31.248133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.194 ms 00:25:12.822 [2024-10-25 18:06:31.248141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.081 [2024-10-25 18:06:31.258099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.081 [2024-10-25 18:06:31.258122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:25:13.081 [2024-10-25 18:06:31.258131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.944 ms 00:25:13.081 [2024-10-25 18:06:31.258137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.081 [2024-10-25 18:06:31.258426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.081 [2024-10-25 18:06:31.258444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:25:13.081 [2024-10-25 18:06:31.258451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.273 ms 00:25:13.081 [2024-10-25 18:06:31.258457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.081 [2024-10-25 18:06:31.292937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:13.081 [2024-10-25 18:06:31.292966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:13.081 [2024-10-25 18:06:31.292975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:13.081 [2024-10-25 18:06:31.292981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.081 [2024-10-25 18:06:31.293008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:13.081 [2024-10-25 18:06:31.293018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:13.081 [2024-10-25 18:06:31.293025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:13.081 [2024-10-25 18:06:31.293031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.081 [2024-10-25 18:06:31.293101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:13.081 [2024-10-25 18:06:31.293109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:13.081 [2024-10-25 18:06:31.293116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:13.081 [2024-10-25 18:06:31.293123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.081 [2024-10-25 18:06:31.293139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:13.081 [2024-10-25 18:06:31.293146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:13.081 [2024-10-25 18:06:31.293155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:13.081 [2024-10-25 18:06:31.293161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.081 [2024-10-25 18:06:31.356143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:13.081 [2024-10-25 18:06:31.356177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:13.081 [2024-10-25 18:06:31.356187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:13.081 [2024-10-25 18:06:31.356193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.081 [2024-10-25 18:06:31.407160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:13.081 [2024-10-25 18:06:31.407201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:13.081 [2024-10-25 18:06:31.407215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:13.081 [2024-10-25 18:06:31.407222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.081 [2024-10-25 18:06:31.407285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:13.081 [2024-10-25 18:06:31.407294] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:13.081 [2024-10-25 18:06:31.407300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:13.081 [2024-10-25 18:06:31.407307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.081 [2024-10-25 18:06:31.407359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:13.081 [2024-10-25 18:06:31.407368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:13.081 [2024-10-25 18:06:31.407374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:13.081 [2024-10-25 18:06:31.407389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.081 [2024-10-25 18:06:31.407464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:13.081 [2024-10-25 18:06:31.407476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:13.081 [2024-10-25 18:06:31.407482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:13.081 [2024-10-25 18:06:31.407488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.081 [2024-10-25 18:06:31.407514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:13.081 [2024-10-25 18:06:31.407522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:25:13.081 [2024-10-25 18:06:31.407528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:13.081 [2024-10-25 18:06:31.407534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.081 [2024-10-25 18:06:31.407579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:13.081 [2024-10-25 18:06:31.407586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:13.081 [2024-10-25 18:06:31.407593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:13.082 [2024-10-25 18:06:31.407600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.082 [2024-10-25 18:06:31.407637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:13.082 [2024-10-25 18:06:31.407645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:13.082 [2024-10-25 18:06:31.407652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:13.082 [2024-10-25 18:06:31.407660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.082 [2024-10-25 18:06:31.407767] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 204.826 ms, result 0 00:25:13.649 18:06:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:25:13.649 18:06:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:13.649 18:06:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:25:13.649 18:06:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:25:13.649 18:06:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:25:13.649 18:06:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:13.649 Remove shared memory files 00:25:13.649 18:06:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:25:13.649 18:06:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove 
shared memory files 00:25:13.649 18:06:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:25:13.649 18:06:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:25:13.908 18:06:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid78174 00:25:13.908 18:06:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:13.908 18:06:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:25:13.908 ************************************ 00:25:13.908 END TEST ftl_upgrade_shutdown 00:25:13.908 ************************************ 00:25:13.908 00:25:13.908 real 1m22.121s 00:25:13.908 user 1m51.979s 00:25:13.908 sys 0m18.910s 00:25:13.908 18:06:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:13.908 18:06:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:13.908 18:06:32 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:25:13.908 18:06:32 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:25:13.908 18:06:32 ftl -- ftl/ftl.sh@14 -- # killprocess 72211 00:25:13.908 Process with pid 72211 is not found 00:25:13.908 18:06:32 ftl -- common/autotest_common.sh@950 -- # '[' -z 72211 ']' 00:25:13.908 18:06:32 ftl -- common/autotest_common.sh@954 -- # kill -0 72211 00:25:13.908 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72211) - No such process 00:25:13.908 18:06:32 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 72211 is not found' 00:25:13.908 18:06:32 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:25:13.908 18:06:32 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=78638 00:25:13.908 18:06:32 ftl -- ftl/ftl.sh@20 -- # waitforlisten 78638 00:25:13.908 18:06:32 ftl -- common/autotest_common.sh@831 -- # '[' -z 78638 ']' 00:25:13.908 18:06:32 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.908 18:06:32 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:13.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.908 18:06:32 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.908 18:06:32 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:13.908 18:06:32 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:13.908 18:06:32 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:13.908 [2024-10-25 18:06:32.204845] Starting SPDK v25.01-pre git sha1 e83d2213a / DPDK 24.03.0 initialization... 
00:25:13.908 [2024-10-25 18:06:32.204972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78638 ] 00:25:14.167 [2024-10-25 18:06:32.369168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.167 [2024-10-25 18:06:32.473481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.734 18:06:33 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:14.734 18:06:33 ftl -- common/autotest_common.sh@864 -- # return 0 00:25:14.734 18:06:33 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:14.993 nvme0n1 00:25:14.993 18:06:33 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:25:14.993 18:06:33 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:14.993 18:06:33 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:15.252 18:06:33 ftl -- ftl/common.sh@28 -- # stores=ab3c5e56-3f75-4f17-b703-aaef41c1f280 00:25:15.252 18:06:33 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:25:15.252 18:06:33 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ab3c5e56-3f75-4f17-b703-aaef41c1f280 00:25:15.511 18:06:33 ftl -- ftl/ftl.sh@23 -- # killprocess 78638 00:25:15.511 18:06:33 ftl -- common/autotest_common.sh@950 -- # '[' -z 78638 ']' 00:25:15.511 18:06:33 ftl -- common/autotest_common.sh@954 -- # kill -0 78638 00:25:15.511 18:06:33 ftl -- common/autotest_common.sh@955 -- # uname 00:25:15.511 18:06:33 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:15.511 18:06:33 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78638 00:25:15.511 18:06:33 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:15.511 killing process with pid 78638 00:25:15.511 18:06:33 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:15.511 18:06:33 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78638' 00:25:15.511 18:06:33 ftl -- common/autotest_common.sh@969 -- # kill 78638 00:25:15.511 18:06:33 ftl -- common/autotest_common.sh@974 -- # wait 78638 00:25:16.897 18:06:35 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:16.897 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:17.155 Waiting for block devices as requested 00:25:17.155 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:17.155 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:17.155 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:25:17.155 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:25:22.509 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:25:22.509 18:06:40 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:25:22.509 18:06:40 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:22.509 Remove shared memory files 00:25:22.509 18:06:40 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:25:22.509 18:06:40 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:25:22.509 18:06:40 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:25:22.509 18:06:40 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:22.509 18:06:40 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:25:22.509 00:25:22.509 real 
9m15.818s 00:25:22.509 user 11m13.876s 00:25:22.509 sys 1m29.288s 00:25:22.509 18:06:40 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:22.509 ************************************ 00:25:22.509 END TEST ftl 00:25:22.509 ************************************ 00:25:22.509 18:06:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:22.509 18:06:40 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:25:22.509 18:06:40 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:25:22.509 18:06:40 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:25:22.509 18:06:40 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:25:22.509 18:06:40 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:25:22.509 18:06:40 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:25:22.509 18:06:40 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:25:22.509 18:06:40 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:25:22.509 18:06:40 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:25:22.509 18:06:40 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:25:22.509 18:06:40 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:22.509 18:06:40 -- common/autotest_common.sh@10 -- # set +x 00:25:22.509 18:06:40 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:25:22.509 18:06:40 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:25:22.509 18:06:40 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:25:22.509 18:06:40 -- common/autotest_common.sh@10 -- # set +x 00:25:23.890 INFO: APP EXITING 00:25:23.890 INFO: killing all VMs 00:25:23.890 INFO: killing vhost app 00:25:23.890 INFO: EXIT DONE 00:25:23.890 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:24.149 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:25:24.149 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:25:24.408 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:25:24.408 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:25:24.667 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:24.926 Cleaning 00:25:24.926 Removing: /var/run/dpdk/spdk0/config 00:25:24.926 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:24.926 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:24.926 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:24.926 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:24.926 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:24.926 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:24.926 Removing: /var/run/dpdk/spdk0 00:25:24.926 Removing: /var/run/dpdk/spdk_pid56875 00:25:24.926 Removing: /var/run/dpdk/spdk_pid57077 00:25:24.926 Removing: /var/run/dpdk/spdk_pid57284 00:25:24.926 Removing: /var/run/dpdk/spdk_pid57377 00:25:24.926 Removing: /var/run/dpdk/spdk_pid57411 00:25:24.926 Removing: /var/run/dpdk/spdk_pid57534 00:25:24.926 Removing: /var/run/dpdk/spdk_pid57552 00:25:24.926 Removing: /var/run/dpdk/spdk_pid57745 00:25:24.926 Removing: /var/run/dpdk/spdk_pid57838 00:25:24.926 Removing: /var/run/dpdk/spdk_pid57940 00:25:24.926 Removing: /var/run/dpdk/spdk_pid58045 00:25:24.926 Removing: /var/run/dpdk/spdk_pid58137 00:25:24.926 Removing: /var/run/dpdk/spdk_pid58182 00:25:24.926 Removing: /var/run/dpdk/spdk_pid58213 00:25:24.926 Removing: /var/run/dpdk/spdk_pid58289 00:25:24.926 Removing: /var/run/dpdk/spdk_pid58384 00:25:24.926 Removing: /var/run/dpdk/spdk_pid58820 00:25:24.926 Removing: /var/run/dpdk/spdk_pid58873 00:25:24.926 
Removing: /var/run/dpdk/spdk_pid58925 00:25:24.926 Removing: /var/run/dpdk/spdk_pid58941 00:25:24.926 Removing: /var/run/dpdk/spdk_pid59032 00:25:24.926 Removing: /var/run/dpdk/spdk_pid59048 00:25:24.926 Removing: /var/run/dpdk/spdk_pid59140 00:25:24.926 Removing: /var/run/dpdk/spdk_pid59156 00:25:24.926 Removing: /var/run/dpdk/spdk_pid59209 00:25:24.926 Removing: /var/run/dpdk/spdk_pid59227 00:25:24.926 Removing: /var/run/dpdk/spdk_pid59280 00:25:24.926 Removing: /var/run/dpdk/spdk_pid59297 00:25:24.926 Removing: /var/run/dpdk/spdk_pid59447 00:25:24.926 Removing: /var/run/dpdk/spdk_pid59484 00:25:24.926 Removing: /var/run/dpdk/spdk_pid59567 00:25:24.926 Removing: /var/run/dpdk/spdk_pid59745 00:25:24.926 Removing: /var/run/dpdk/spdk_pid59823 00:25:24.926 Removing: /var/run/dpdk/spdk_pid59864 00:25:24.926 Removing: /var/run/dpdk/spdk_pid60292 00:25:24.926 Removing: /var/run/dpdk/spdk_pid60390 00:25:24.926 Removing: /var/run/dpdk/spdk_pid60499 00:25:24.926 Removing: /var/run/dpdk/spdk_pid60552 00:25:24.926 Removing: /var/run/dpdk/spdk_pid60572 00:25:24.926 Removing: /var/run/dpdk/spdk_pid60658 00:25:24.926 Removing: /var/run/dpdk/spdk_pid61288 00:25:24.926 Removing: /var/run/dpdk/spdk_pid61325 00:25:24.926 Removing: /var/run/dpdk/spdk_pid61805 00:25:24.926 Removing: /var/run/dpdk/spdk_pid61897 00:25:24.926 Removing: /var/run/dpdk/spdk_pid62012 00:25:24.926 Removing: /var/run/dpdk/spdk_pid62065 00:25:24.926 Removing: /var/run/dpdk/spdk_pid62085 00:25:24.926 Removing: /var/run/dpdk/spdk_pid62116 00:25:24.926 Removing: /var/run/dpdk/spdk_pid63951 00:25:24.926 Removing: /var/run/dpdk/spdk_pid64077 00:25:24.926 Removing: /var/run/dpdk/spdk_pid64081 00:25:24.926 Removing: /var/run/dpdk/spdk_pid64103 00:25:24.926 Removing: /var/run/dpdk/spdk_pid64143 00:25:24.926 Removing: /var/run/dpdk/spdk_pid64148 00:25:24.926 Removing: /var/run/dpdk/spdk_pid64160 00:25:24.926 Removing: /var/run/dpdk/spdk_pid64205 00:25:24.926 Removing: /var/run/dpdk/spdk_pid64209 00:25:24.926 Removing: /var/run/dpdk/spdk_pid64221 00:25:24.926 Removing: /var/run/dpdk/spdk_pid64266 00:25:24.927 Removing: /var/run/dpdk/spdk_pid64270 00:25:24.927 Removing: /var/run/dpdk/spdk_pid64282 00:25:24.927 Removing: /var/run/dpdk/spdk_pid65654 00:25:24.927 Removing: /var/run/dpdk/spdk_pid65758 00:25:24.927 Removing: /var/run/dpdk/spdk_pid67170 00:25:24.927 Removing: /var/run/dpdk/spdk_pid68560 00:25:24.927 Removing: /var/run/dpdk/spdk_pid68655 00:25:24.927 Removing: /var/run/dpdk/spdk_pid68731 00:25:24.927 Removing: /var/run/dpdk/spdk_pid68817 00:25:24.927 Removing: /var/run/dpdk/spdk_pid68918 00:25:24.927 Removing: /var/run/dpdk/spdk_pid68993 00:25:24.927 Removing: /var/run/dpdk/spdk_pid69135 00:25:24.927 Removing: /var/run/dpdk/spdk_pid69492 00:25:24.927 Removing: /var/run/dpdk/spdk_pid69523 00:25:24.927 Removing: /var/run/dpdk/spdk_pid69963 00:25:24.927 Removing: /var/run/dpdk/spdk_pid70140 00:25:25.185 Removing: /var/run/dpdk/spdk_pid70239 00:25:25.185 Removing: /var/run/dpdk/spdk_pid70354 00:25:25.185 Removing: /var/run/dpdk/spdk_pid70407 00:25:25.185 Removing: /var/run/dpdk/spdk_pid70428 00:25:25.185 Removing: /var/run/dpdk/spdk_pid70738 00:25:25.185 Removing: /var/run/dpdk/spdk_pid70795 00:25:25.185 Removing: /var/run/dpdk/spdk_pid70868 00:25:25.185 Removing: /var/run/dpdk/spdk_pid71259 00:25:25.185 Removing: /var/run/dpdk/spdk_pid71408 00:25:25.185 Removing: /var/run/dpdk/spdk_pid72211 00:25:25.185 Removing: /var/run/dpdk/spdk_pid72355 00:25:25.185 Removing: /var/run/dpdk/spdk_pid72535 00:25:25.185 Removing: 
/var/run/dpdk/spdk_pid72627 00:25:25.185 Removing: /var/run/dpdk/spdk_pid72925 00:25:25.185 Removing: /var/run/dpdk/spdk_pid73174 00:25:25.185 Removing: /var/run/dpdk/spdk_pid73509 00:25:25.185 Removing: /var/run/dpdk/spdk_pid73685 00:25:25.185 Removing: /var/run/dpdk/spdk_pid73782 00:25:25.185 Removing: /var/run/dpdk/spdk_pid73831 00:25:25.185 Removing: /var/run/dpdk/spdk_pid73935 00:25:25.185 Removing: /var/run/dpdk/spdk_pid73960 00:25:25.185 Removing: /var/run/dpdk/spdk_pid74015 00:25:25.185 Removing: /var/run/dpdk/spdk_pid74181 00:25:25.185 Removing: /var/run/dpdk/spdk_pid74389 00:25:25.185 Removing: /var/run/dpdk/spdk_pid74735 00:25:25.185 Removing: /var/run/dpdk/spdk_pid75287 00:25:25.185 Removing: /var/run/dpdk/spdk_pid75705 00:25:25.185 Removing: /var/run/dpdk/spdk_pid76054 00:25:25.185 Removing: /var/run/dpdk/spdk_pid76202 00:25:25.185 Removing: /var/run/dpdk/spdk_pid76288 00:25:25.185 Removing: /var/run/dpdk/spdk_pid76694 00:25:25.185 Removing: /var/run/dpdk/spdk_pid76752 00:25:25.185 Removing: /var/run/dpdk/spdk_pid77049 00:25:25.185 Removing: /var/run/dpdk/spdk_pid77335 00:25:25.185 Removing: /var/run/dpdk/spdk_pid77668 00:25:25.185 Removing: /var/run/dpdk/spdk_pid77779 00:25:25.185 Removing: /var/run/dpdk/spdk_pid77826 00:25:25.185 Removing: /var/run/dpdk/spdk_pid77879 00:25:25.185 Removing: /var/run/dpdk/spdk_pid77939 00:25:25.185 Removing: /var/run/dpdk/spdk_pid77993 00:25:25.186 Removing: /var/run/dpdk/spdk_pid78174 00:25:25.186 Removing: /var/run/dpdk/spdk_pid78248 00:25:25.186 Removing: /var/run/dpdk/spdk_pid78310 00:25:25.186 Removing: /var/run/dpdk/spdk_pid78371 00:25:25.186 Removing: /var/run/dpdk/spdk_pid78405 00:25:25.186 Removing: /var/run/dpdk/spdk_pid78508 00:25:25.186 Removing: /var/run/dpdk/spdk_pid78638 00:25:25.186 Clean 00:25:25.186 18:06:43 -- common/autotest_common.sh@1449 -- # return 0 00:25:25.186 18:06:43 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:25:25.186 18:06:43 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:25.186 18:06:43 -- common/autotest_common.sh@10 -- # set +x 00:25:25.186 18:06:43 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:25:25.186 18:06:43 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:25.186 18:06:43 -- common/autotest_common.sh@10 -- # set +x 00:25:25.186 18:06:43 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:25.186 18:06:43 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:25:25.186 18:06:43 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:25:25.186 18:06:43 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:25:25.186 18:06:43 -- spdk/autotest.sh@394 -- # hostname 00:25:25.186 18:06:43 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:25:25.444 geninfo: WARNING: invalid characters removed from testname! 
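The lcov invocations around this point implement autotest's coverage epilogue: capture a test-time snapshot of the spdk tree tagged with the hostname, merge it with the pre-test baseline (cov_base.info), then strip DPDK, system-header, and helper-app paths from the combined report. A condensed, runnable sketch of that flow, with $OUT and $ROOT standing in for the /home/vagrant/spdk_repo paths in the log and the genhtml/geninfo --rc flags elided:

    # coverage epilogue, as traced above (sketch)
    LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"
    $LCOV -c --no-external -d "$ROOT" -t "$(hostname)" -o "$OUT/cov_test.info"      # capture the test run
    $LCOV -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"  # merge with the baseline
    $LCOV -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"              # drop bundled DPDK
    $LCOV -r "$OUT/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$OUT/cov_total.info"  # drop system paths
    $LCOV -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"      # drop helper apps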
00:25:52.008 18:07:07 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:53.395 18:07:11 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:55.943 18:07:14 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:57.859 18:07:16 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:00.405 18:07:18 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:02.955 18:07:21 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:05.587 18:07:23 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:26:05.587 18:07:23 -- common/autotest_common.sh@1688 -- $ [[ y == y ]] 00:26:05.587 18:07:23 -- common/autotest_common.sh@1689 -- $ lcov --version 00:26:05.587 18:07:23 -- common/autotest_common.sh@1689 -- $ awk '{print $NF}' 00:26:05.587 18:07:23 -- common/autotest_common.sh@1689 -- $ lt 1.15 2 00:26:05.587 18:07:23 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:26:05.587 18:07:23 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:26:05.587 18:07:23 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:26:05.587 18:07:23 -- scripts/common.sh@336 -- $ IFS=.-: 00:26:05.587 18:07:23 -- scripts/common.sh@336 -- $ read -ra ver1 00:26:05.587 18:07:23 -- scripts/common.sh@337 -- $ IFS=.-: 00:26:05.587 18:07:23 -- scripts/common.sh@337 -- $ read -ra ver2 00:26:05.587 18:07:23 -- scripts/common.sh@338 -- $ local 'op=<' 00:26:05.587 18:07:23 -- scripts/common.sh@340 -- $ ver1_l=2 00:26:05.587 18:07:23 -- scripts/common.sh@341 -- $ ver2_l=1 00:26:05.587 18:07:23 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 
v 00:26:05.587 18:07:23 -- scripts/common.sh@344 -- $ case "$op" in 00:26:05.587 18:07:23 -- scripts/common.sh@345 -- $ : 1 00:26:05.587 18:07:23 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:26:05.587 18:07:23 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:05.587 18:07:23 -- scripts/common.sh@365 -- $ decimal 1 00:26:05.587 18:07:23 -- scripts/common.sh@353 -- $ local d=1 00:26:05.587 18:07:23 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:26:05.587 18:07:23 -- scripts/common.sh@355 -- $ echo 1 00:26:05.587 18:07:23 -- scripts/common.sh@365 -- $ ver1[v]=1 00:26:05.587 18:07:23 -- scripts/common.sh@366 -- $ decimal 2 00:26:05.587 18:07:23 -- scripts/common.sh@353 -- $ local d=2 00:26:05.587 18:07:23 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:26:05.587 18:07:23 -- scripts/common.sh@355 -- $ echo 2 00:26:05.587 18:07:23 -- scripts/common.sh@366 -- $ ver2[v]=2 00:26:05.587 18:07:23 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:26:05.587 18:07:23 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:26:05.587 18:07:23 -- scripts/common.sh@368 -- $ return 0 00:26:05.587 18:07:23 -- common/autotest_common.sh@1690 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:05.587 18:07:23 -- common/autotest_common.sh@1702 -- $ export 'LCOV_OPTS= 00:26:05.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.587 --rc genhtml_branch_coverage=1 00:26:05.587 --rc genhtml_function_coverage=1 00:26:05.587 --rc genhtml_legend=1 00:26:05.587 --rc geninfo_all_blocks=1 00:26:05.587 --rc geninfo_unexecuted_blocks=1 00:26:05.587 00:26:05.587 ' 00:26:05.587 18:07:23 -- common/autotest_common.sh@1702 -- $ LCOV_OPTS=' 00:26:05.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.587 --rc genhtml_branch_coverage=1 00:26:05.587 --rc genhtml_function_coverage=1 00:26:05.587 --rc genhtml_legend=1 00:26:05.587 --rc geninfo_all_blocks=1 00:26:05.587 --rc geninfo_unexecuted_blocks=1 00:26:05.587 00:26:05.587 ' 00:26:05.587 18:07:23 -- common/autotest_common.sh@1703 -- $ export 'LCOV=lcov 00:26:05.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.587 --rc genhtml_branch_coverage=1 00:26:05.587 --rc genhtml_function_coverage=1 00:26:05.587 --rc genhtml_legend=1 00:26:05.587 --rc geninfo_all_blocks=1 00:26:05.587 --rc geninfo_unexecuted_blocks=1 00:26:05.587 00:26:05.587 ' 00:26:05.587 18:07:23 -- common/autotest_common.sh@1703 -- $ LCOV='lcov 00:26:05.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.587 --rc genhtml_branch_coverage=1 00:26:05.587 --rc genhtml_function_coverage=1 00:26:05.587 --rc genhtml_legend=1 00:26:05.587 --rc geninfo_all_blocks=1 00:26:05.587 --rc geninfo_unexecuted_blocks=1 00:26:05.587 00:26:05.587 ' 00:26:05.587 18:07:23 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:05.587 18:07:23 -- scripts/common.sh@15 -- $ shopt -s extglob 00:26:05.587 18:07:23 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:26:05.587 18:07:23 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:05.587 18:07:23 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:05.587 18:07:23 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.588 18:07:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.588 18:07:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.588 18:07:23 -- paths/export.sh@5 -- $ export PATH 00:26:05.588 18:07:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.588 18:07:23 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:26:05.588 18:07:23 -- common/autobuild_common.sh@486 -- $ date +%s 00:26:05.588 18:07:23 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1729879643.XXXXXX 00:26:05.588 18:07:23 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1729879643.xks3Ak 00:26:05.588 18:07:23 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:26:05.588 18:07:23 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:26:05.588 18:07:23 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:26:05.588 18:07:23 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:26:05.588 18:07:23 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:26:05.588 18:07:23 -- common/autobuild_common.sh@502 -- $ get_config_params 00:26:05.588 18:07:23 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:26:05.588 18:07:23 -- common/autotest_common.sh@10 -- $ set +x 00:26:05.588 18:07:23 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:26:05.588 18:07:23 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:26:05.588 18:07:23 -- pm/common@17 -- $ local monitor 00:26:05.588 18:07:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:05.588 18:07:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 
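Before those monitors start, the xtrace at 18:07:23 walks SPDK's lcov version gate: scripts/common.sh tokenizes each version string on '.', '-' and ':', compares it component-by-component as decimals, and, because lcov 1.15 sorts below 2 here, keeps the legacy --rc lcov_branch_coverage/lcov_function_coverage options in LCOV_OPTS. A condensed reconstruction of the traced comparison (a sketch covering only the strict '<' and '>' operators seen above; the real cmp_versions also handles equality operators and non-decimal components):

    decimal() {                       # numeric components pass through, anything else compares as 0
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] || d=0
        echo "$d"
    }
    cmp_versions() {
        local ver1 ver2 ver1_l ver2_l op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            ver1[v]=$(decimal "${ver1[v]:-0}")
            ver2[v]=$(decimal "${ver2[v]:-0}")
            ((ver1[v] > ver2[v])) && { [[ $op == '>' ]]; return; }
            ((ver1[v] < ver2[v])) && { [[ $op == '<' ]]; return; }
        done
        return 1                      # versions equal: neither strict test holds
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    lt 1.15 2 && echo "old lcov: keep the --rc options"   # matches the trace above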
00:26:05.588 18:07:23 -- pm/common@25 -- $ sleep 1 00:26:05.588 18:07:23 -- pm/common@21 -- $ date +%s 00:26:05.588 18:07:23 -- pm/common@21 -- $ date +%s 00:26:05.588 18:07:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1729879643 00:26:05.588 18:07:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1729879643 00:26:05.588 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1729879643_collect-cpu-load.pm.log 00:26:05.588 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1729879643_collect-vmstat.pm.log 00:26:06.552 18:07:24 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:26:06.552 18:07:24 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:26:06.552 18:07:24 -- spdk/autopackage.sh@14 -- $ timing_finish 00:26:06.552 18:07:24 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:26:06.552 18:07:24 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:26:06.552 18:07:24 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:06.552 18:07:24 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:26:06.552 18:07:24 -- pm/common@29 -- $ signal_monitor_resources TERM 00:26:06.552 18:07:24 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:26:06.552 18:07:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:06.552 18:07:24 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:26:06.552 18:07:24 -- pm/common@44 -- $ pid=80334 00:26:06.552 18:07:24 -- pm/common@50 -- $ kill -TERM 80334 00:26:06.552 18:07:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:06.552 18:07:24 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:26:06.552 18:07:24 -- pm/common@44 -- $ pid=80335 00:26:06.552 18:07:24 -- pm/common@50 -- $ kill -TERM 80335 00:26:06.552 + [[ -n 5032 ]] 00:26:06.552 + sudo kill 5032 00:26:06.562 [Pipeline] } 00:26:06.582 [Pipeline] // timeout 00:26:06.588 [Pipeline] } 00:26:06.607 [Pipeline] // stage 00:26:06.611 [Pipeline] } 00:26:06.625 [Pipeline] // catchError 00:26:06.637 [Pipeline] stage 00:26:06.639 [Pipeline] { (Stop VM) 00:26:06.654 [Pipeline] sh 00:26:06.938 + vagrant halt 00:26:09.483 ==> default: Halting domain... 00:26:12.784 [Pipeline] sh 00:26:13.058 + vagrant destroy -f 00:26:15.600 ==> default: Removing domain... 
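The pm/common trace above, together with the kill -TERM calls just before '+ [[ -n 5032 ]]', shows the resource-monitor lifecycle: each collector is launched in the background with a shared timestamp suffix, a pidfile is kept under the power/ output directory, and an EXIT trap TERMs whatever pidfiles still exist (pids 80334 and 80335 here). A minimal sketch of that pattern, with $PM_DIR and $POWER_DIR standing in for the spdk_repo paths in the log:

    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)   # the two collectors traced above
    start_monitor_resources() {
        local monitor ts
        ts=$(date +%s)                                    # one suffix shared by all collectors
        for monitor in "${MONITOR_RESOURCES[@]}"; do
            "$PM_DIR/$monitor" -d "$POWER_DIR" -l -p "monitor.autopackage.sh.$ts" &
            echo $! > "$POWER_DIR/$monitor.pid"           # sketch: the real collector manages its own pidfile
        done
        sleep 1
    }
    stop_monitor_resources() {
        local monitor pid
        for monitor in "${MONITOR_RESOURCES[@]}"; do
            [[ -e $POWER_DIR/$monitor.pid ]] || continue
            pid=$(< "$POWER_DIR/$monitor.pid")
            kill -TERM "$pid"
        done
    }
    trap stop_monitor_resources EXIT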
00:26:16.185 [Pipeline] sh 00:26:16.471 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:26:16.481 [Pipeline] } 00:26:16.495 [Pipeline] // stage 00:26:16.499 [Pipeline] } 00:26:16.510 [Pipeline] // dir 00:26:16.515 [Pipeline] } 00:26:16.527 [Pipeline] // wrap 00:26:16.532 [Pipeline] } 00:26:16.543 [Pipeline] // catchError 00:26:16.549 [Pipeline] stage 00:26:16.550 [Pipeline] { (Epilogue) 00:26:16.561 [Pipeline] sh 00:26:16.841 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:26:22.129 [Pipeline] catchError 00:26:22.131 [Pipeline] { 00:26:22.145 [Pipeline] sh 00:26:22.426 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:26:22.426 Artifacts sizes are good 00:26:22.435 [Pipeline] } 00:26:22.449 [Pipeline] // catchError 00:26:22.461 [Pipeline] archiveArtifacts 00:26:22.467 Archiving artifacts 00:26:22.581 [Pipeline] cleanWs 00:26:22.590 [WS-CLEANUP] Deleting project workspace... 00:26:22.590 [WS-CLEANUP] Deferred wipeout is used... 00:26:22.623 [WS-CLEANUP] done 00:26:22.624 [Pipeline] } 00:26:22.639 [Pipeline] // stage 00:26:22.643 [Pipeline] } 00:26:22.656 [Pipeline] // node 00:26:22.660 [Pipeline] End of Pipeline 00:26:22.692 Finished: SUCCESS
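The closing stages follow the standard vg-autotest teardown: halt and destroy the Vagrant VM, move the results into the Jenkins workspace, compress and size-check the artifacts, then archive them and wipe the workspace. The shell side of that epilogue condenses to (a sketch; script paths exactly as invoked in the log, assumed to run from the job's working directory):

    vagrant halt
    vagrant destroy -f
    mv output /var/jenkins/workspace/nvme-vg-autotest/output
    jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
    jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh   # prints "Artifacts sizes are good"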