00:00:00.001 Started by upstream project "autotest-nightly" build number 4335 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3698 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.147 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.148 The recommended git tool is: git 00:00:00.148 using credential 00000000-0000-0000-0000-000000000002 00:00:00.149 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.189 Fetching changes from the remote Git repository 00:00:00.193 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.229 Using shallow fetch with depth 1 00:00:00.229 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.229 > git --version # timeout=10 00:00:00.258 > git --version # 'git version 2.39.2' 00:00:00.258 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.278 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.278 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.336 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.348 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.361 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.361 > git config core.sparsecheckout # timeout=10 00:00:06.372 > git read-tree -mu HEAD # timeout=10 00:00:06.389 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.413 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.414 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.518 [Pipeline] Start of Pipeline 00:00:06.530 [Pipeline] library 00:00:06.531 Loading library shm_lib@master 00:00:06.531 Library shm_lib@master is cached. Copying from home. 00:00:06.547 [Pipeline] node 00:00:06.561 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:06.562 [Pipeline] { 00:00:06.572 [Pipeline] catchError 00:00:06.574 [Pipeline] { 00:00:06.585 [Pipeline] wrap 00:00:06.591 [Pipeline] { 00:00:06.596 [Pipeline] stage 00:00:06.598 [Pipeline] { (Prologue) 00:00:06.610 [Pipeline] echo 00:00:06.611 Node: VM-host-SM38 00:00:06.615 [Pipeline] cleanWs 00:00:06.626 [WS-CLEANUP] Deleting project workspace... 00:00:06.626 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.654 [WS-CLEANUP] done 00:00:06.849 [Pipeline] setCustomBuildProperty 00:00:06.949 [Pipeline] httpRequest 00:00:07.292 [Pipeline] echo 00:00:07.293 Sorcerer 10.211.164.20 is alive 00:00:07.302 [Pipeline] retry 00:00:07.303 [Pipeline] { 00:00:07.312 [Pipeline] httpRequest 00:00:07.317 HttpMethod: GET 00:00:07.317 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.318 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.319 Response Code: HTTP/1.1 200 OK 00:00:07.320 Success: Status code 200 is in the accepted range: 200,404 00:00:07.320 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.850 [Pipeline] } 00:00:09.875 [Pipeline] // retry 00:00:09.886 [Pipeline] sh 00:00:10.172 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.189 [Pipeline] httpRequest 00:00:10.603 [Pipeline] echo 00:00:10.604 Sorcerer 10.211.164.20 is alive 00:00:10.612 [Pipeline] retry 00:00:10.614 [Pipeline] { 00:00:10.626 [Pipeline] httpRequest 00:00:10.631 HttpMethod: GET 00:00:10.632 URL: http://10.211.164.20/packages/spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz 00:00:10.632 Sending request to url: http://10.211.164.20/packages/spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz 00:00:10.649 Response Code: HTTP/1.1 200 OK 00:00:10.650 Success: Status code 200 is in the accepted range: 200,404 00:00:10.650 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz 00:01:17.557 [Pipeline] } 00:01:17.575 [Pipeline] // retry 00:01:17.582 [Pipeline] sh 00:01:17.870 + tar --no-same-owner -xf spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz 00:01:20.411 [Pipeline] sh 00:01:20.691 + git -C spdk log --oneline -n5 00:01:20.691 8d3947977 spdk_dd: simplify `io_uring_peek_cqe` return code processing 00:01:20.691 77ee034c7 bdev/nvme: Add lock to unprotected operations around attach controller 00:01:20.691 48454bb28 bdev/nvme: Add lock to unprotected operations around detach controller 00:01:20.691 4b59d7893 bdev/nvme: Use nbdev always for local nvme_bdev pointer variables 00:01:20.691 e56f1618f lib/ftl: Add explicit support for write unit sizes of base device 00:01:20.712 [Pipeline] writeFile 00:01:20.727 [Pipeline] sh 00:01:21.014 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:21.027 [Pipeline] sh 00:01:21.311 + cat autorun-spdk.conf 00:01:21.311 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.311 SPDK_TEST_NVME=1 00:01:21.311 SPDK_TEST_FTL=1 00:01:21.311 SPDK_TEST_ISAL=1 00:01:21.311 SPDK_RUN_ASAN=1 00:01:21.311 SPDK_RUN_UBSAN=1 00:01:21.311 SPDK_TEST_XNVME=1 00:01:21.311 SPDK_TEST_NVME_FDP=1 00:01:21.311 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:21.317 RUN_NIGHTLY=1 00:01:21.319 [Pipeline] } 00:01:21.333 [Pipeline] // stage 00:01:21.350 [Pipeline] stage 00:01:21.353 [Pipeline] { (Run VM) 00:01:21.366 [Pipeline] sh 00:01:21.646 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:21.646 + echo 'Start stage prepare_nvme.sh' 00:01:21.646 Start stage prepare_nvme.sh 00:01:21.646 + [[ -n 5 ]] 00:01:21.646 + disk_prefix=ex5 00:01:21.646 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:21.646 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:21.646 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:21.646 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.646 ++ SPDK_TEST_NVME=1 
00:01:21.646 ++ SPDK_TEST_FTL=1 00:01:21.646 ++ SPDK_TEST_ISAL=1 00:01:21.646 ++ SPDK_RUN_ASAN=1 00:01:21.646 ++ SPDK_RUN_UBSAN=1 00:01:21.646 ++ SPDK_TEST_XNVME=1 00:01:21.646 ++ SPDK_TEST_NVME_FDP=1 00:01:21.646 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:21.646 ++ RUN_NIGHTLY=1 00:01:21.646 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:21.646 + nvme_files=() 00:01:21.646 + declare -A nvme_files 00:01:21.646 + backend_dir=/var/lib/libvirt/images/backends 00:01:21.646 + nvme_files['nvme.img']=5G 00:01:21.646 + nvme_files['nvme-cmb.img']=5G 00:01:21.646 + nvme_files['nvme-multi0.img']=4G 00:01:21.646 + nvme_files['nvme-multi1.img']=4G 00:01:21.646 + nvme_files['nvme-multi2.img']=4G 00:01:21.646 + nvme_files['nvme-openstack.img']=8G 00:01:21.646 + nvme_files['nvme-zns.img']=5G 00:01:21.646 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:21.646 + (( SPDK_TEST_FTL == 1 )) 00:01:21.646 + nvme_files["nvme-ftl.img"]=6G 00:01:21.646 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:21.646 + nvme_files["nvme-fdp.img"]=1G 00:01:21.646 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:21.646 + for nvme in "${!nvme_files[@]}" 00:01:21.646 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:01:21.646 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:21.646 + for nvme in "${!nvme_files[@]}" 00:01:21.646 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-ftl.img -s 6G 00:01:22.213 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:22.213 + for nvme in "${!nvme_files[@]}" 00:01:22.213 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:01:22.213 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:22.213 + for nvme in "${!nvme_files[@]}" 00:01:22.213 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:01:22.213 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:22.213 + for nvme in "${!nvme_files[@]}" 00:01:22.213 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:01:22.472 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:22.472 + for nvme in "${!nvme_files[@]}" 00:01:22.472 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:01:22.472 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:22.472 + for nvme in "${!nvme_files[@]}" 00:01:22.472 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:01:22.472 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:22.472 + for nvme in "${!nvme_files[@]}" 00:01:22.472 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-fdp.img -s 1G 00:01:22.472 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:22.472 + for nvme in "${!nvme_files[@]}" 00:01:22.472 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n 
/var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:01:22.731 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:22.731 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:01:22.731 + echo 'End stage prepare_nvme.sh' 00:01:22.731 End stage prepare_nvme.sh 00:01:22.742 [Pipeline] sh 00:01:23.021 + DISTRO=fedora39 00:01:23.021 + CPUS=10 00:01:23.021 + RAM=12288 00:01:23.021 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:23.021 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex5-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:23.021 00:01:23.021 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:23.021 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:23.021 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:23.021 HELP=0 00:01:23.021 DRY_RUN=0 00:01:23.021 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,/var/lib/libvirt/images/backends/ex5-nvme-fdp.img, 00:01:23.021 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:23.021 NVME_AUTO_CREATE=0 00:01:23.021 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,, 00:01:23.021 NVME_CMB=,,,, 00:01:23.021 NVME_PMR=,,,, 00:01:23.021 NVME_ZNS=,,,, 00:01:23.021 NVME_MS=true,,,, 00:01:23.021 NVME_FDP=,,,on, 00:01:23.021 SPDK_VAGRANT_DISTRO=fedora39 00:01:23.021 SPDK_VAGRANT_VMCPU=10 00:01:23.021 SPDK_VAGRANT_VMRAM=12288 00:01:23.021 SPDK_VAGRANT_PROVIDER=libvirt 00:01:23.021 SPDK_VAGRANT_HTTP_PROXY= 00:01:23.021 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:23.021 SPDK_OPENSTACK_NETWORK=0 00:01:23.021 VAGRANT_PACKAGE_BOX=0 00:01:23.021 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:23.021 FORCE_DISTRO=true 00:01:23.021 VAGRANT_BOX_VERSION= 00:01:23.021 EXTRA_VAGRANTFILES= 00:01:23.021 NIC_MODEL=e1000 00:01:23.021 00:01:23.021 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:23.021 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:25.590 Bringing machine 'default' up with 'libvirt' provider... 00:01:25.852 ==> default: Creating image (snapshot of base box volume). 00:01:25.852 ==> default: Creating domain with the following settings... 
00:01:25.852 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733366756_35a77cf41d1a39b2bb30 00:01:25.852 ==> default: -- Domain type: kvm 00:01:25.852 ==> default: -- Cpus: 10 00:01:25.853 ==> default: -- Feature: acpi 00:01:25.853 ==> default: -- Feature: apic 00:01:25.853 ==> default: -- Feature: pae 00:01:25.853 ==> default: -- Memory: 12288M 00:01:25.853 ==> default: -- Memory Backing: hugepages: 00:01:25.853 ==> default: -- Management MAC: 00:01:25.853 ==> default: -- Loader: 00:01:25.853 ==> default: -- Nvram: 00:01:25.853 ==> default: -- Base box: spdk/fedora39 00:01:25.853 ==> default: -- Storage pool: default 00:01:25.853 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733366756_35a77cf41d1a39b2bb30.img (20G) 00:01:25.853 ==> default: -- Volume Cache: default 00:01:25.853 ==> default: -- Kernel: 00:01:25.853 ==> default: -- Initrd: 00:01:26.116 ==> default: -- Graphics Type: vnc 00:01:26.116 ==> default: -- Graphics Port: -1 00:01:26.116 ==> default: -- Graphics IP: 127.0.0.1 00:01:26.116 ==> default: -- Graphics Password: Not defined 00:01:26.116 ==> default: -- Video Type: cirrus 00:01:26.116 ==> default: -- Video VRAM: 9216 00:01:26.116 ==> default: -- Sound Type: 00:01:26.116 ==> default: -- Keymap: en-us 00:01:26.116 ==> default: -- TPM Path: 00:01:26.116 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:26.116 ==> default: -- Command line args: 00:01:26.116 ==> default: -> value=-device, 00:01:26.116 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:26.116 ==> default: -> value=-drive, 00:01:26.116 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:26.116 ==> default: -> value=-device, 00:01:26.116 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:26.116 ==> default: -> value=-device, 00:01:26.116 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:26.116 ==> default: -> value=-drive, 00:01:26.116 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-1-drive0, 00:01:26.116 ==> default: -> value=-device, 00:01:26.116 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:26.116 ==> default: -> value=-device, 00:01:26.116 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:26.116 ==> default: -> value=-drive, 00:01:26.116 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:26.116 ==> default: -> value=-device, 00:01:26.116 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:26.116 ==> default: -> value=-drive, 00:01:26.116 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:26.116 ==> default: -> value=-device, 00:01:26.116 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:26.116 ==> default: -> value=-drive, 00:01:26.116 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:26.116 ==> default: -> value=-device, 00:01:26.116 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:26.116 ==> default: -> value=-device, 00:01:26.116 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:26.116 ==> default: -> value=-device, 00:01:26.116 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:26.116 ==> default: -> value=-drive, 00:01:26.116 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:26.116 ==> default: -> value=-device, 00:01:26.116 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:26.116 ==> default: Creating shared folders metadata... 00:01:26.116 ==> default: Starting domain. 00:01:28.028 ==> default: Waiting for domain to get an IP address... 00:01:50.024 ==> default: Waiting for SSH to become available... 00:01:50.024 ==> default: Configuring and enabling network interfaces... 00:01:52.570 default: SSH address: 192.168.121.86:22 00:01:52.570 default: SSH username: vagrant 00:01:52.570 default: SSH auth method: private key 00:01:54.485 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:02.623 ==> default: Mounting SSHFS shared folder... 00:02:04.538 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:04.538 ==> default: Checking Mount.. 00:02:05.923 ==> default: Folder Successfully Mounted! 00:02:05.923 00:02:05.923 SUCCESS! 00:02:05.923 00:02:05.923 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:05.923 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:05.923 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:05.923 00:02:05.934 [Pipeline] } 00:02:05.952 [Pipeline] // stage 00:02:05.963 [Pipeline] dir 00:02:05.964 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:05.966 [Pipeline] { 00:02:05.982 [Pipeline] catchError 00:02:05.984 [Pipeline] { 00:02:06.000 [Pipeline] sh 00:02:06.304 + vagrant ssh-config --host vagrant 00:02:06.304 + sed -ne '/^Host/,$p' 00:02:06.304 + tee ssh_conf 00:02:08.899 Host vagrant 00:02:08.899 HostName 192.168.121.86 00:02:08.899 User vagrant 00:02:08.899 Port 22 00:02:08.899 UserKnownHostsFile /dev/null 00:02:08.899 StrictHostKeyChecking no 00:02:08.899 PasswordAuthentication no 00:02:08.899 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:08.899 IdentitiesOnly yes 00:02:08.899 LogLevel FATAL 00:02:08.899 ForwardAgent yes 00:02:08.899 ForwardX11 yes 00:02:08.899 00:02:08.918 [Pipeline] withEnv 00:02:08.921 [Pipeline] { 00:02:08.938 [Pipeline] sh 00:02:09.224 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:09.224 source /etc/os-release 00:02:09.224 [[ -e /image.version ]] && img=$(< /image.version) 00:02:09.224 # Minimal, systemd-like check. 
00:02:09.224 if [[ -e /.dockerenv ]]; then 00:02:09.224 # Clear garbage from the node'\''s name: 00:02:09.224 # agt-er_autotest_547-896 -> autotest_547-896 00:02:09.224 # $HOSTNAME is the actual container id 00:02:09.224 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:09.224 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:09.224 # We can assume this is a mount from a host where container is running, 00:02:09.224 # so fetch its hostname to easily identify the target swarm worker. 00:02:09.224 container="$(< /etc/hostname) ($agent)" 00:02:09.224 else 00:02:09.224 # Fallback 00:02:09.224 container=$agent 00:02:09.224 fi 00:02:09.224 fi 00:02:09.224 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:09.224 ' 00:02:09.499 [Pipeline] } 00:02:09.516 [Pipeline] // withEnv 00:02:09.526 [Pipeline] setCustomBuildProperty 00:02:09.539 [Pipeline] stage 00:02:09.542 [Pipeline] { (Tests) 00:02:09.557 [Pipeline] sh 00:02:09.835 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:10.119 [Pipeline] sh 00:02:10.404 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:10.681 [Pipeline] timeout 00:02:10.681 Timeout set to expire in 50 min 00:02:10.683 [Pipeline] { 00:02:10.695 [Pipeline] sh 00:02:10.976 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:11.549 HEAD is now at 8d3947977 spdk_dd: simplify `io_uring_peek_cqe` return code processing 00:02:11.564 [Pipeline] sh 00:02:11.910 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:12.186 [Pipeline] sh 00:02:12.471 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:12.750 [Pipeline] sh 00:02:13.032 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:13.293 ++ readlink -f spdk_repo 00:02:13.293 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:13.293 + [[ -n /home/vagrant/spdk_repo ]] 00:02:13.293 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:13.293 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:13.293 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:13.293 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:13.293 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:13.293 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:13.293 + cd /home/vagrant/spdk_repo 00:02:13.293 + source /etc/os-release 00:02:13.293 ++ NAME='Fedora Linux' 00:02:13.293 ++ VERSION='39 (Cloud Edition)' 00:02:13.293 ++ ID=fedora 00:02:13.293 ++ VERSION_ID=39 00:02:13.293 ++ VERSION_CODENAME= 00:02:13.293 ++ PLATFORM_ID=platform:f39 00:02:13.293 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:13.293 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:13.293 ++ LOGO=fedora-logo-icon 00:02:13.293 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:13.293 ++ HOME_URL=https://fedoraproject.org/ 00:02:13.293 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:13.293 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:13.293 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:13.293 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:13.293 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:13.293 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:13.293 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:13.293 ++ SUPPORT_END=2024-11-12 00:02:13.293 ++ VARIANT='Cloud Edition' 00:02:13.293 ++ VARIANT_ID=cloud 00:02:13.293 + uname -a 00:02:13.293 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:13.293 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:13.555 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:13.816 Hugepages 00:02:13.816 node hugesize free / total 00:02:13.816 node0 1048576kB 0 / 0 00:02:13.816 node0 2048kB 0 / 0 00:02:13.816 00:02:13.816 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:14.078 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:14.078 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:14.078 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:14.078 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:14.078 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:14.078 + rm -f /tmp/spdk-ld-path 00:02:14.078 + source autorun-spdk.conf 00:02:14.078 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:14.078 ++ SPDK_TEST_NVME=1 00:02:14.078 ++ SPDK_TEST_FTL=1 00:02:14.078 ++ SPDK_TEST_ISAL=1 00:02:14.078 ++ SPDK_RUN_ASAN=1 00:02:14.078 ++ SPDK_RUN_UBSAN=1 00:02:14.078 ++ SPDK_TEST_XNVME=1 00:02:14.078 ++ SPDK_TEST_NVME_FDP=1 00:02:14.078 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:14.078 ++ RUN_NIGHTLY=1 00:02:14.078 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:14.078 + [[ -n '' ]] 00:02:14.078 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:14.078 + for M in /var/spdk/build-*-manifest.txt 00:02:14.078 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:14.078 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:14.078 + for M in /var/spdk/build-*-manifest.txt 00:02:14.078 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:14.078 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:14.078 + for M in /var/spdk/build-*-manifest.txt 00:02:14.078 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:14.078 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:14.078 ++ uname 00:02:14.078 + [[ Linux == \L\i\n\u\x ]] 00:02:14.078 + sudo dmesg -T 00:02:14.078 + sudo dmesg --clear 00:02:14.078 + dmesg_pid=5030 00:02:14.078 
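Note: the four NVMe controllers reported by setup.sh status above are the QEMU devices defined when the VM was created: serial 12340 at addr 0x10 (the 6G FTL image with 64-byte metadata), 12341 at 0x11 (the plain 5G nvme image), 12342 at 0x12 (three namespaces backed by the multi0/1/2 images) and 12343 at 0x13 (the 1G FDP image behind the fdp-subsys3 subsystem). A minimal sketch for confirming that mapping from inside the guest, assuming only the standard Linux nvme sysfs layout (the 'device' symlink and 'serial' attribute):

    # List each controller with its PCI address and serial number
    # (sysfs attribute names assumed from the upstream Linux nvme driver).
    for c in /sys/class/nvme/nvme*; do
        echo "$(basename "$c"): pci=$(basename "$(readlink -f "$c/device")") serial=$(cat "$c/serial")"
    done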
+ [[ Fedora Linux == FreeBSD ]] 00:02:14.078 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:14.078 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:14.078 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:14.078 + [[ -x /usr/src/fio-static/fio ]] 00:02:14.078 + sudo dmesg -Tw 00:02:14.078 + export FIO_BIN=/usr/src/fio-static/fio 00:02:14.078 + FIO_BIN=/usr/src/fio-static/fio 00:02:14.078 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:14.078 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:14.078 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:14.078 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:14.078 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:14.078 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:14.078 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:14.078 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:14.078 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:14.340 02:46:44 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:14.340 02:46:44 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:14.340 02:46:44 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:14.340 02:46:44 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:02:14.340 02:46:44 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:02:14.340 02:46:44 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:02:14.340 02:46:44 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:14.340 02:46:44 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:14.340 02:46:44 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:02:14.340 02:46:44 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:02:14.340 02:46:44 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:14.340 02:46:44 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1 00:02:14.340 02:46:44 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:14.340 02:46:44 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:14.340 02:46:45 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:14.340 02:46:45 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:14.340 02:46:45 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:14.340 02:46:45 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:14.340 02:46:45 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:14.340 02:46:45 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:14.340 02:46:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:14.340 02:46:45 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:14.340 02:46:45 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:14.340 02:46:45 -- paths/export.sh@5 -- $ export PATH 00:02:14.340 02:46:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:14.340 02:46:45 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:14.340 02:46:45 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:14.340 02:46:45 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733366805.XXXXXX 00:02:14.340 02:46:45 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733366805.W03pyj 00:02:14.340 02:46:45 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:14.340 02:46:45 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:14.340 02:46:45 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:14.340 02:46:45 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:14.340 02:46:45 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:14.340 02:46:45 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:14.340 02:46:45 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:14.340 02:46:45 -- common/autotest_common.sh@10 -- $ set +x 00:02:14.340 02:46:45 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:14.340 02:46:45 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:14.340 02:46:45 -- pm/common@17 -- $ local monitor 00:02:14.340 02:46:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:14.340 02:46:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:14.340 02:46:45 -- pm/common@25 -- $ sleep 1 00:02:14.340 02:46:45 -- pm/common@21 -- $ date +%s 00:02:14.340 02:46:45 -- pm/common@21 -- $ date +%s 00:02:14.340 02:46:45 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733366805 00:02:14.340 02:46:45 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733366805 00:02:14.340 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733366805_collect-cpu-load.pm.log 00:02:14.340 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733366805_collect-vmstat.pm.log 00:02:15.283 02:46:46 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:15.283 02:46:46 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:15.283 02:46:46 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:15.283 02:46:46 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:15.283 02:46:46 -- spdk/autobuild.sh@16 -- $ date -u 00:02:15.283 Thu Dec 5 02:46:46 AM UTC 2024 00:02:15.283 02:46:46 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:15.283 v25.01-pre-296-g8d3947977 00:02:15.283 02:46:46 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:15.283 02:46:46 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:15.283 02:46:46 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:15.283 02:46:46 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:15.283 02:46:46 -- common/autotest_common.sh@10 -- $ set +x 00:02:15.283 ************************************ 00:02:15.283 START TEST asan 00:02:15.283 ************************************ 00:02:15.283 using asan 00:02:15.283 02:46:46 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:15.283 00:02:15.283 real 0m0.000s 00:02:15.283 user 0m0.000s 00:02:15.283 sys 0m0.000s 00:02:15.283 02:46:46 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:15.283 ************************************ 00:02:15.283 END TEST asan 00:02:15.283 ************************************ 00:02:15.283 02:46:46 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:15.544 02:46:46 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:15.544 02:46:46 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:15.544 02:46:46 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:15.544 02:46:46 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:15.544 02:46:46 -- common/autotest_common.sh@10 -- $ set +x 00:02:15.544 ************************************ 00:02:15.544 START TEST ubsan 00:02:15.544 ************************************ 00:02:15.544 using ubsan 00:02:15.544 02:46:46 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:15.544 00:02:15.544 real 0m0.000s 00:02:15.544 user 0m0.000s 00:02:15.544 sys 0m0.000s 00:02:15.544 ************************************ 00:02:15.544 END TEST ubsan 00:02:15.544 ************************************ 00:02:15.544 02:46:46 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:15.544 02:46:46 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:15.544 02:46:46 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:15.544 02:46:46 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:15.544 02:46:46 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:15.544 02:46:46 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:15.544 02:46:46 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:15.544 02:46:46 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:15.544 02:46:46 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:02:15.544 02:46:46 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:15.544 02:46:46 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:15.544 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:15.544 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:16.116 Using 'verbs' RDMA provider 00:02:29.307 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:39.310 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:39.310 Creating mk/config.mk...done. 00:02:39.310 Creating mk/cc.flags.mk...done. 00:02:39.310 Type 'make' to build. 00:02:39.310 02:47:09 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:39.310 02:47:09 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:39.310 02:47:09 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:39.310 02:47:09 -- common/autotest_common.sh@10 -- $ set +x 00:02:39.310 ************************************ 00:02:39.310 START TEST make 00:02:39.310 ************************************ 00:02:39.310 02:47:09 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:39.310 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:39.310 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:39.310 meson setup builddir \ 00:02:39.310 -Dwith-libaio=enabled \ 00:02:39.310 -Dwith-liburing=enabled \ 00:02:39.310 -Dwith-libvfn=disabled \ 00:02:39.310 -Dwith-spdk=disabled \ 00:02:39.310 -Dexamples=false \ 00:02:39.310 -Dtests=false \ 00:02:39.310 -Dtools=false && \ 00:02:39.310 meson compile -C builddir && \ 00:02:39.310 cd -) 00:02:39.310 make[1]: Nothing to be done for 'all'. 
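Note: the parenthesized block above shows the bundled xnvme subproject being configured and compiled with Meson as part of the 'make' test (this run was configured with --with-xnvme). The same step can be reproduced by hand with exactly the options shown in the log; a sketch using the paths from this run:

    # Re-run the xnvme subproject build as autobuild invoked it above.
    cd /home/vagrant/spdk_repo/spdk/xnvme
    export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig
    meson setup builddir \
        -Dwith-libaio=enabled \
        -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled \
        -Dwith-spdk=disabled \
        -Dexamples=false -Dtests=false -Dtools=false
    meson compile -C builddir

The Meson output that follows (project xnvme 0.7.5, compile steps [1/76] onward) is the result of this configuration.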
00:02:41.860 The Meson build system 00:02:41.860 Version: 1.5.0 00:02:41.860 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:41.860 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:41.860 Build type: native build 00:02:41.860 Project name: xnvme 00:02:41.860 Project version: 0.7.5 00:02:41.860 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:41.860 C linker for the host machine: cc ld.bfd 2.40-14 00:02:41.860 Host machine cpu family: x86_64 00:02:41.860 Host machine cpu: x86_64 00:02:41.860 Message: host_machine.system: linux 00:02:41.860 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:41.860 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:41.860 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:41.860 Run-time dependency threads found: YES 00:02:41.860 Has header "setupapi.h" : NO 00:02:41.860 Has header "linux/blkzoned.h" : YES 00:02:41.860 Has header "linux/blkzoned.h" : YES (cached) 00:02:41.860 Has header "libaio.h" : YES 00:02:41.860 Library aio found: YES 00:02:41.860 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:41.860 Run-time dependency liburing found: YES 2.2 00:02:41.860 Dependency libvfn skipped: feature with-libvfn disabled 00:02:41.860 Found CMake: /usr/bin/cmake (3.27.7) 00:02:41.860 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:02:41.860 Subproject spdk : skipped: feature with-spdk disabled 00:02:41.860 Run-time dependency appleframeworks found: NO (tried framework) 00:02:41.860 Run-time dependency appleframeworks found: NO (tried framework) 00:02:41.860 Library rt found: YES 00:02:41.860 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:41.860 Configuring xnvme_config.h using configuration 00:02:41.860 Configuring xnvme.spec using configuration 00:02:41.860 Run-time dependency bash-completion found: YES 2.11 00:02:41.860 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:41.860 Program cp found: YES (/usr/bin/cp) 00:02:41.860 Build targets in project: 3 00:02:41.860 00:02:41.860 xnvme 0.7.5 00:02:41.860 00:02:41.860 Subprojects 00:02:41.860 spdk : NO Feature 'with-spdk' disabled 00:02:41.860 00:02:41.860 User defined options 00:02:41.860 examples : false 00:02:41.860 tests : false 00:02:41.860 tools : false 00:02:41.860 with-libaio : enabled 00:02:41.860 with-liburing: enabled 00:02:41.860 with-libvfn : disabled 00:02:41.860 with-spdk : disabled 00:02:41.860 00:02:41.860 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:41.860 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:41.860 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:02:42.122 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:02:42.122 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:02:42.122 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:02:42.122 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:02:42.122 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:02:42.122 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:02:42.122 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:02:42.122 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:02:42.122 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:02:42.122 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:02:42.122 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:02:42.122 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:02:42.122 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:02:42.122 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:02:42.122 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:02:42.122 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:02:42.122 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:02:42.122 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:02:42.122 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:02:42.122 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:02:42.122 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:02:42.122 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:02:42.122 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:02:42.122 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:02:42.122 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:02:42.383 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:02:42.383 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:02:42.383 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:02:42.383 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:02:42.383 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:02:42.383 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:02:42.383 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:02:42.383 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:02:42.383 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:02:42.383 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:02:42.383 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:02:42.383 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:02:42.383 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:02:42.383 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:02:42.383 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:02:42.383 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:02:42.383 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:02:42.383 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:02:42.383 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:02:42.383 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:02:42.383 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:02:42.384 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:02:42.384 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:02:42.384 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:02:42.384 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:02:42.384 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:02:42.384 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:02:42.384 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:02:42.384 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:02:42.384 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:02:42.384 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:02:42.384 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:02:42.384 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:02:42.384 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:02:42.644 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:02:42.644 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:02:42.644 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:02:42.644 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:02:42.644 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:02:42.644 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:02:42.644 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:02:42.644 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:02:42.644 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:02:42.644 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:02:42.644 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:02:42.644 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:02:42.644 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:02:42.902 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:02:42.903 [75/76] Linking static target lib/libxnvme.a 00:02:43.162 [76/76] Linking target lib/libxnvme.so.0.7.5 00:02:43.162 INFO: autodetecting backend as ninja 00:02:43.162 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:43.162 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:49.716 The Meson build system 00:02:49.716 Version: 1.5.0 00:02:49.716 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:49.716 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:49.716 Build type: native build 00:02:49.716 Program cat found: YES (/usr/bin/cat) 00:02:49.716 Project name: DPDK 00:02:49.716 Project version: 24.03.0 00:02:49.716 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:49.716 C linker for the host machine: cc ld.bfd 2.40-14 00:02:49.716 Host machine cpu family: x86_64 00:02:49.716 Host machine cpu: x86_64 00:02:49.716 Message: ## Building in Developer Mode ## 00:02:49.716 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:49.716 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:49.716 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:49.716 Program python3 found: YES (/usr/bin/python3) 00:02:49.716 Program cat found: YES (/usr/bin/cat) 00:02:49.716 Compiler for C supports arguments -march=native: YES 00:02:49.716 Checking for size of "void *" : 8 00:02:49.716 Checking for size of "void *" : 8 (cached) 00:02:49.716 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:02:49.716 Library m found: YES 00:02:49.716 Library numa found: YES 00:02:49.716 Has header "numaif.h" : YES 00:02:49.716 Library fdt found: NO 00:02:49.716 Library execinfo found: NO 00:02:49.716 Has header "execinfo.h" : YES 00:02:49.716 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:49.716 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:49.716 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:49.716 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:49.716 Run-time dependency openssl found: YES 3.1.1 00:02:49.716 Run-time dependency libpcap found: YES 1.10.4 00:02:49.716 Has header "pcap.h" with dependency libpcap: YES 00:02:49.716 Compiler for C supports arguments -Wcast-qual: YES 00:02:49.716 Compiler for C supports arguments -Wdeprecated: YES 00:02:49.716 Compiler for C supports arguments -Wformat: YES 00:02:49.716 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:49.716 Compiler for C supports arguments -Wformat-security: NO 00:02:49.716 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:49.716 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:49.716 Compiler for C supports arguments -Wnested-externs: YES 00:02:49.716 Compiler for C supports arguments -Wold-style-definition: YES 00:02:49.716 Compiler for C supports arguments -Wpointer-arith: YES 00:02:49.716 Compiler for C supports arguments -Wsign-compare: YES 00:02:49.716 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:49.716 Compiler for C supports arguments -Wundef: YES 00:02:49.716 Compiler for C supports arguments -Wwrite-strings: YES 00:02:49.716 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:49.716 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:49.716 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:49.716 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:49.716 Program objdump found: YES (/usr/bin/objdump) 00:02:49.716 Compiler for C supports arguments -mavx512f: YES 00:02:49.716 Checking if "AVX512 checking" compiles: YES 00:02:49.716 Fetching value of define "__SSE4_2__" : 1 00:02:49.716 Fetching value of define "__AES__" : 1 00:02:49.716 Fetching value of define "__AVX__" : 1 00:02:49.716 Fetching value of define "__AVX2__" : 1 00:02:49.716 Fetching value of define "__AVX512BW__" : 1 00:02:49.716 Fetching value of define "__AVX512CD__" : 1 00:02:49.716 Fetching value of define "__AVX512DQ__" : 1 00:02:49.716 Fetching value of define "__AVX512F__" : 1 00:02:49.716 Fetching value of define "__AVX512VL__" : 1 00:02:49.716 Fetching value of define "__PCLMUL__" : 1 00:02:49.716 Fetching value of define "__RDRND__" : 1 00:02:49.716 Fetching value of define "__RDSEED__" : 1 00:02:49.716 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:49.716 Fetching value of define "__znver1__" : (undefined) 00:02:49.716 Fetching value of define "__znver2__" : (undefined) 00:02:49.716 Fetching value of define "__znver3__" : (undefined) 00:02:49.716 Fetching value of define "__znver4__" : (undefined) 00:02:49.716 Library asan found: YES 00:02:49.717 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:49.717 Message: lib/log: Defining dependency "log" 00:02:49.717 Message: lib/kvargs: Defining dependency "kvargs" 00:02:49.717 Message: lib/telemetry: Defining dependency "telemetry" 00:02:49.717 Library rt found: YES 00:02:49.717 Checking for function "getentropy" : NO 00:02:49.717 Message: 
lib/eal: Defining dependency "eal" 00:02:49.717 Message: lib/ring: Defining dependency "ring" 00:02:49.717 Message: lib/rcu: Defining dependency "rcu" 00:02:49.717 Message: lib/mempool: Defining dependency "mempool" 00:02:49.717 Message: lib/mbuf: Defining dependency "mbuf" 00:02:49.717 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:49.717 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:49.717 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:49.717 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:49.717 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:49.717 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:49.717 Compiler for C supports arguments -mpclmul: YES 00:02:49.717 Compiler for C supports arguments -maes: YES 00:02:49.717 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:49.717 Compiler for C supports arguments -mavx512bw: YES 00:02:49.717 Compiler for C supports arguments -mavx512dq: YES 00:02:49.717 Compiler for C supports arguments -mavx512vl: YES 00:02:49.717 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:49.717 Compiler for C supports arguments -mavx2: YES 00:02:49.717 Compiler for C supports arguments -mavx: YES 00:02:49.717 Message: lib/net: Defining dependency "net" 00:02:49.717 Message: lib/meter: Defining dependency "meter" 00:02:49.717 Message: lib/ethdev: Defining dependency "ethdev" 00:02:49.717 Message: lib/pci: Defining dependency "pci" 00:02:49.717 Message: lib/cmdline: Defining dependency "cmdline" 00:02:49.717 Message: lib/hash: Defining dependency "hash" 00:02:49.717 Message: lib/timer: Defining dependency "timer" 00:02:49.717 Message: lib/compressdev: Defining dependency "compressdev" 00:02:49.717 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:49.717 Message: lib/dmadev: Defining dependency "dmadev" 00:02:49.717 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:49.717 Message: lib/power: Defining dependency "power" 00:02:49.717 Message: lib/reorder: Defining dependency "reorder" 00:02:49.717 Message: lib/security: Defining dependency "security" 00:02:49.717 Has header "linux/userfaultfd.h" : YES 00:02:49.717 Has header "linux/vduse.h" : YES 00:02:49.717 Message: lib/vhost: Defining dependency "vhost" 00:02:49.717 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:49.717 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:49.717 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:49.717 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:49.717 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:49.717 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:49.717 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:49.717 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:49.717 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:49.717 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:49.717 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:49.717 Configuring doxy-api-html.conf using configuration 00:02:49.717 Configuring doxy-api-man.conf using configuration 00:02:49.717 Program mandb found: YES (/usr/bin/mandb) 00:02:49.717 Program sphinx-build found: NO 00:02:49.717 Configuring rte_build_config.h using configuration 00:02:49.717 Message: 00:02:49.717 ================= 00:02:49.717 Applications Enabled 00:02:49.717 
================= 00:02:49.717 00:02:49.717 apps: 00:02:49.717 00:02:49.717 00:02:49.717 Message: 00:02:49.717 ================= 00:02:49.717 Libraries Enabled 00:02:49.717 ================= 00:02:49.717 00:02:49.717 libs: 00:02:49.717 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:49.717 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:49.717 cryptodev, dmadev, power, reorder, security, vhost, 00:02:49.717 00:02:49.717 Message: 00:02:49.717 =============== 00:02:49.717 Drivers Enabled 00:02:49.717 =============== 00:02:49.717 00:02:49.717 common: 00:02:49.717 00:02:49.717 bus: 00:02:49.717 pci, vdev, 00:02:49.717 mempool: 00:02:49.717 ring, 00:02:49.717 dma: 00:02:49.717 00:02:49.717 net: 00:02:49.717 00:02:49.717 crypto: 00:02:49.717 00:02:49.717 compress: 00:02:49.717 00:02:49.717 vdpa: 00:02:49.717 00:02:49.717 00:02:49.717 Message: 00:02:49.717 ================= 00:02:49.717 Content Skipped 00:02:49.717 ================= 00:02:49.717 00:02:49.717 apps: 00:02:49.717 dumpcap: explicitly disabled via build config 00:02:49.717 graph: explicitly disabled via build config 00:02:49.717 pdump: explicitly disabled via build config 00:02:49.717 proc-info: explicitly disabled via build config 00:02:49.717 test-acl: explicitly disabled via build config 00:02:49.717 test-bbdev: explicitly disabled via build config 00:02:49.717 test-cmdline: explicitly disabled via build config 00:02:49.717 test-compress-perf: explicitly disabled via build config 00:02:49.717 test-crypto-perf: explicitly disabled via build config 00:02:49.717 test-dma-perf: explicitly disabled via build config 00:02:49.717 test-eventdev: explicitly disabled via build config 00:02:49.717 test-fib: explicitly disabled via build config 00:02:49.717 test-flow-perf: explicitly disabled via build config 00:02:49.717 test-gpudev: explicitly disabled via build config 00:02:49.717 test-mldev: explicitly disabled via build config 00:02:49.717 test-pipeline: explicitly disabled via build config 00:02:49.717 test-pmd: explicitly disabled via build config 00:02:49.717 test-regex: explicitly disabled via build config 00:02:49.717 test-sad: explicitly disabled via build config 00:02:49.717 test-security-perf: explicitly disabled via build config 00:02:49.717 00:02:49.717 libs: 00:02:49.717 argparse: explicitly disabled via build config 00:02:49.717 metrics: explicitly disabled via build config 00:02:49.717 acl: explicitly disabled via build config 00:02:49.717 bbdev: explicitly disabled via build config 00:02:49.717 bitratestats: explicitly disabled via build config 00:02:49.717 bpf: explicitly disabled via build config 00:02:49.717 cfgfile: explicitly disabled via build config 00:02:49.717 distributor: explicitly disabled via build config 00:02:49.717 efd: explicitly disabled via build config 00:02:49.717 eventdev: explicitly disabled via build config 00:02:49.717 dispatcher: explicitly disabled via build config 00:02:49.717 gpudev: explicitly disabled via build config 00:02:49.717 gro: explicitly disabled via build config 00:02:49.717 gso: explicitly disabled via build config 00:02:49.717 ip_frag: explicitly disabled via build config 00:02:49.717 jobstats: explicitly disabled via build config 00:02:49.717 latencystats: explicitly disabled via build config 00:02:49.717 lpm: explicitly disabled via build config 00:02:49.717 member: explicitly disabled via build config 00:02:49.717 pcapng: explicitly disabled via build config 00:02:49.717 rawdev: explicitly disabled via build config 00:02:49.717 regexdev: explicitly 
disabled via build config 00:02:49.717 mldev: explicitly disabled via build config 00:02:49.717 rib: explicitly disabled via build config 00:02:49.717 sched: explicitly disabled via build config 00:02:49.717 stack: explicitly disabled via build config 00:02:49.717 ipsec: explicitly disabled via build config 00:02:49.717 pdcp: explicitly disabled via build config 00:02:49.717 fib: explicitly disabled via build config 00:02:49.717 port: explicitly disabled via build config 00:02:49.717 pdump: explicitly disabled via build config 00:02:49.717 table: explicitly disabled via build config 00:02:49.717 pipeline: explicitly disabled via build config 00:02:49.717 graph: explicitly disabled via build config 00:02:49.717 node: explicitly disabled via build config 00:02:49.717 00:02:49.717 drivers: 00:02:49.717 common/cpt: not in enabled drivers build config 00:02:49.717 common/dpaax: not in enabled drivers build config 00:02:49.717 common/iavf: not in enabled drivers build config 00:02:49.717 common/idpf: not in enabled drivers build config 00:02:49.717 common/ionic: not in enabled drivers build config 00:02:49.717 common/mvep: not in enabled drivers build config 00:02:49.717 common/octeontx: not in enabled drivers build config 00:02:49.717 bus/auxiliary: not in enabled drivers build config 00:02:49.717 bus/cdx: not in enabled drivers build config 00:02:49.717 bus/dpaa: not in enabled drivers build config 00:02:49.717 bus/fslmc: not in enabled drivers build config 00:02:49.717 bus/ifpga: not in enabled drivers build config 00:02:49.717 bus/platform: not in enabled drivers build config 00:02:49.717 bus/uacce: not in enabled drivers build config 00:02:49.717 bus/vmbus: not in enabled drivers build config 00:02:49.717 common/cnxk: not in enabled drivers build config 00:02:49.717 common/mlx5: not in enabled drivers build config 00:02:49.717 common/nfp: not in enabled drivers build config 00:02:49.717 common/nitrox: not in enabled drivers build config 00:02:49.717 common/qat: not in enabled drivers build config 00:02:49.717 common/sfc_efx: not in enabled drivers build config 00:02:49.717 mempool/bucket: not in enabled drivers build config 00:02:49.717 mempool/cnxk: not in enabled drivers build config 00:02:49.717 mempool/dpaa: not in enabled drivers build config 00:02:49.717 mempool/dpaa2: not in enabled drivers build config 00:02:49.717 mempool/octeontx: not in enabled drivers build config 00:02:49.717 mempool/stack: not in enabled drivers build config 00:02:49.717 dma/cnxk: not in enabled drivers build config 00:02:49.718 dma/dpaa: not in enabled drivers build config 00:02:49.718 dma/dpaa2: not in enabled drivers build config 00:02:49.718 dma/hisilicon: not in enabled drivers build config 00:02:49.718 dma/idxd: not in enabled drivers build config 00:02:49.718 dma/ioat: not in enabled drivers build config 00:02:49.718 dma/skeleton: not in enabled drivers build config 00:02:49.718 net/af_packet: not in enabled drivers build config 00:02:49.718 net/af_xdp: not in enabled drivers build config 00:02:49.718 net/ark: not in enabled drivers build config 00:02:49.718 net/atlantic: not in enabled drivers build config 00:02:49.718 net/avp: not in enabled drivers build config 00:02:49.718 net/axgbe: not in enabled drivers build config 00:02:49.718 net/bnx2x: not in enabled drivers build config 00:02:49.718 net/bnxt: not in enabled drivers build config 00:02:49.718 net/bonding: not in enabled drivers build config 00:02:49.718 net/cnxk: not in enabled drivers build config 00:02:49.718 net/cpfl: not in enabled drivers 
build config 00:02:49.718 net/cxgbe: not in enabled drivers build config 00:02:49.718 net/dpaa: not in enabled drivers build config 00:02:49.718 net/dpaa2: not in enabled drivers build config 00:02:49.718 net/e1000: not in enabled drivers build config 00:02:49.718 net/ena: not in enabled drivers build config 00:02:49.718 net/enetc: not in enabled drivers build config 00:02:49.718 net/enetfec: not in enabled drivers build config 00:02:49.718 net/enic: not in enabled drivers build config 00:02:49.718 net/failsafe: not in enabled drivers build config 00:02:49.718 net/fm10k: not in enabled drivers build config 00:02:49.718 net/gve: not in enabled drivers build config 00:02:49.718 net/hinic: not in enabled drivers build config 00:02:49.718 net/hns3: not in enabled drivers build config 00:02:49.718 net/i40e: not in enabled drivers build config 00:02:49.718 net/iavf: not in enabled drivers build config 00:02:49.718 net/ice: not in enabled drivers build config 00:02:49.718 net/idpf: not in enabled drivers build config 00:02:49.718 net/igc: not in enabled drivers build config 00:02:49.718 net/ionic: not in enabled drivers build config 00:02:49.718 net/ipn3ke: not in enabled drivers build config 00:02:49.718 net/ixgbe: not in enabled drivers build config 00:02:49.718 net/mana: not in enabled drivers build config 00:02:49.718 net/memif: not in enabled drivers build config 00:02:49.718 net/mlx4: not in enabled drivers build config 00:02:49.718 net/mlx5: not in enabled drivers build config 00:02:49.718 net/mvneta: not in enabled drivers build config 00:02:49.718 net/mvpp2: not in enabled drivers build config 00:02:49.718 net/netvsc: not in enabled drivers build config 00:02:49.718 net/nfb: not in enabled drivers build config 00:02:49.718 net/nfp: not in enabled drivers build config 00:02:49.718 net/ngbe: not in enabled drivers build config 00:02:49.718 net/null: not in enabled drivers build config 00:02:49.718 net/octeontx: not in enabled drivers build config 00:02:49.718 net/octeon_ep: not in enabled drivers build config 00:02:49.718 net/pcap: not in enabled drivers build config 00:02:49.718 net/pfe: not in enabled drivers build config 00:02:49.718 net/qede: not in enabled drivers build config 00:02:49.718 net/ring: not in enabled drivers build config 00:02:49.718 net/sfc: not in enabled drivers build config 00:02:49.718 net/softnic: not in enabled drivers build config 00:02:49.718 net/tap: not in enabled drivers build config 00:02:49.718 net/thunderx: not in enabled drivers build config 00:02:49.718 net/txgbe: not in enabled drivers build config 00:02:49.718 net/vdev_netvsc: not in enabled drivers build config 00:02:49.718 net/vhost: not in enabled drivers build config 00:02:49.718 net/virtio: not in enabled drivers build config 00:02:49.718 net/vmxnet3: not in enabled drivers build config 00:02:49.718 raw/*: missing internal dependency, "rawdev" 00:02:49.718 crypto/armv8: not in enabled drivers build config 00:02:49.718 crypto/bcmfs: not in enabled drivers build config 00:02:49.718 crypto/caam_jr: not in enabled drivers build config 00:02:49.718 crypto/ccp: not in enabled drivers build config 00:02:49.718 crypto/cnxk: not in enabled drivers build config 00:02:49.718 crypto/dpaa_sec: not in enabled drivers build config 00:02:49.718 crypto/dpaa2_sec: not in enabled drivers build config 00:02:49.718 crypto/ipsec_mb: not in enabled drivers build config 00:02:49.718 crypto/mlx5: not in enabled drivers build config 00:02:49.718 crypto/mvsam: not in enabled drivers build config 00:02:49.718 crypto/nitrox: 
not in enabled drivers build config 00:02:49.718 crypto/null: not in enabled drivers build config 00:02:49.718 crypto/octeontx: not in enabled drivers build config 00:02:49.718 crypto/openssl: not in enabled drivers build config 00:02:49.718 crypto/scheduler: not in enabled drivers build config 00:02:49.718 crypto/uadk: not in enabled drivers build config 00:02:49.718 crypto/virtio: not in enabled drivers build config 00:02:49.718 compress/isal: not in enabled drivers build config 00:02:49.718 compress/mlx5: not in enabled drivers build config 00:02:49.718 compress/nitrox: not in enabled drivers build config 00:02:49.718 compress/octeontx: not in enabled drivers build config 00:02:49.718 compress/zlib: not in enabled drivers build config 00:02:49.718 regex/*: missing internal dependency, "regexdev" 00:02:49.718 ml/*: missing internal dependency, "mldev" 00:02:49.718 vdpa/ifc: not in enabled drivers build config 00:02:49.718 vdpa/mlx5: not in enabled drivers build config 00:02:49.718 vdpa/nfp: not in enabled drivers build config 00:02:49.718 vdpa/sfc: not in enabled drivers build config 00:02:49.718 event/*: missing internal dependency, "eventdev" 00:02:49.718 baseband/*: missing internal dependency, "bbdev" 00:02:49.718 gpu/*: missing internal dependency, "gpudev" 00:02:49.718 00:02:49.718 00:02:49.718 Build targets in project: 84 00:02:49.718 00:02:49.718 DPDK 24.03.0 00:02:49.718 00:02:49.718 User defined options 00:02:49.718 buildtype : debug 00:02:49.718 default_library : shared 00:02:49.718 libdir : lib 00:02:49.718 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:49.718 b_sanitize : address 00:02:49.718 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:49.718 c_link_args : 00:02:49.718 cpu_instruction_set: native 00:02:49.718 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:49.718 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:49.718 enable_docs : false 00:02:49.718 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:49.718 enable_kmods : false 00:02:49.718 max_lcores : 128 00:02:49.718 tests : false 00:02:49.718 00:02:49.718 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:49.718 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:49.718 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:49.718 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:49.718 [3/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:49.718 [4/267] Linking static target lib/librte_kvargs.a 00:02:49.718 [5/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:49.718 [6/267] Linking static target lib/librte_log.a 00:02:49.718 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:49.718 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:49.718 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:49.718 [10/267] 
Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:49.718 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:49.976 [12/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.976 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:49.976 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:49.976 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:49.976 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:49.976 [17/267] Linking static target lib/librte_telemetry.a 00:02:49.976 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:50.233 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:50.233 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:50.233 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:50.233 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:50.233 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:50.233 [24/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.233 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:50.233 [26/267] Linking target lib/librte_log.so.24.1 00:02:50.491 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:50.491 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:50.491 [29/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:50.491 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:50.491 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:50.491 [32/267] Linking target lib/librte_kvargs.so.24.1 00:02:50.491 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:50.491 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:50.748 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:50.748 [36/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.748 [37/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:50.748 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:50.748 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:50.748 [40/267] Linking target lib/librte_telemetry.so.24.1 00:02:50.748 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:50.748 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:50.748 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:51.006 [44/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:51.006 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:51.006 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:51.006 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:51.006 [48/267] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:51.006 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:51.006 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:51.264 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:51.264 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:51.264 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:51.264 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:51.264 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:51.264 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:51.264 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:51.264 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:51.522 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:51.522 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:51.522 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:51.522 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:51.522 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:51.781 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:51.781 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:51.781 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:51.781 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:51.781 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:51.781 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:52.039 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:52.039 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:52.039 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:52.039 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:52.040 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:52.040 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:52.040 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:52.040 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:52.299 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:52.299 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:52.299 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:52.299 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:52.574 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:52.574 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:52.574 [84/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:52.574 [85/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:52.574 [86/267] Linking static target lib/librte_ring.a 00:02:52.574 [87/267] Linking static target lib/librte_eal.a 00:02:52.574 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:52.574 [89/267] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:52.834 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:52.834 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:52.834 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:52.834 [93/267] Linking static target lib/librte_mempool.a 00:02:52.834 [94/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:52.834 [95/267] Linking static target lib/librte_rcu.a 00:02:52.834 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:53.094 [97/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.094 [98/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:53.094 [99/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:53.094 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:53.094 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:53.352 [102/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.352 [103/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:53.352 [104/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:53.352 [105/267] Linking static target lib/librte_mbuf.a 00:02:53.352 [106/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:53.352 [107/267] Linking static target lib/librte_meter.a 00:02:53.611 [108/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:53.611 [109/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:53.611 [110/267] Linking static target lib/librte_net.a 00:02:53.611 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:53.611 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:53.611 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:53.869 [114/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.869 [115/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.870 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:53.870 [117/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.870 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:54.128 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:54.128 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:54.128 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:54.385 [122/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.385 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:54.385 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:54.385 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:54.385 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:54.385 [127/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:54.643 [128/267] Linking static target lib/librte_pci.a 00:02:54.643 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:54.643 [130/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:54.643 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:54.643 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:54.643 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:54.644 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:54.644 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:54.644 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:54.644 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:54.644 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:54.644 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:54.922 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:54.922 [141/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.922 [142/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:54.922 [143/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:54.922 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:54.922 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:54.922 [146/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:54.922 [147/267] Linking static target lib/librte_cmdline.a 00:02:54.922 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:55.181 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:55.181 [150/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:55.181 [151/267] Linking static target lib/librte_timer.a 00:02:55.439 [152/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:55.439 [153/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:55.439 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:55.439 [155/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:55.439 [156/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:55.439 [157/267] Linking static target lib/librte_compressdev.a 00:02:55.439 [158/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:55.697 [159/267] Linking static target lib/librte_hash.a 00:02:55.697 [160/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:55.697 [161/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:55.697 [162/267] Linking static target lib/librte_ethdev.a 00:02:55.697 [163/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:55.697 [164/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.697 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:55.955 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:55.955 [167/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:55.955 [168/267] Linking static target lib/librte_dmadev.a 00:02:55.955 [169/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:55.955 [170/267] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:56.213 [171/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:56.213 [172/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:56.472 [173/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.472 [174/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:56.472 [175/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.472 [176/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:56.472 [177/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:56.472 [178/267] Linking static target lib/librte_cryptodev.a 00:02:56.472 [179/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.472 [180/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:56.730 [181/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.730 [182/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:56.730 [183/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:56.730 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:56.988 [185/267] Linking static target lib/librte_power.a 00:02:56.988 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:56.988 [187/267] Linking static target lib/librte_reorder.a 00:02:56.988 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:56.988 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:56.988 [190/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:56.988 [191/267] Linking static target lib/librte_security.a 00:02:57.245 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:57.245 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.503 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:57.503 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.761 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:57.761 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:57.761 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:57.761 [199/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.020 [200/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:58.020 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:58.020 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:58.020 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:58.278 [204/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:58.278 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:58.278 [206/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:58.278 [207/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:58.278 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:58.278 [209/267] Linking static target 
drivers/libtmp_rte_bus_pci.a 00:02:58.535 [210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.535 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:58.535 [212/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:58.535 [213/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:58.535 [214/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:58.535 [215/267] Linking static target drivers/librte_bus_vdev.a 00:02:58.535 [216/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:58.535 [217/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:58.535 [218/267] Linking static target drivers/librte_bus_pci.a 00:02:58.793 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:58.793 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:58.793 [221/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.793 [222/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:58.793 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:59.051 [224/267] Linking static target drivers/librte_mempool_ring.a 00:02:59.051 [225/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:59.051 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.310 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:00.685 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.685 [229/267] Linking target lib/librte_eal.so.24.1 00:03:00.685 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:00.685 [231/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:00.685 [232/267] Linking target lib/librte_meter.so.24.1 00:03:00.685 [233/267] Linking target lib/librte_ring.so.24.1 00:03:00.685 [234/267] Linking target lib/librte_pci.so.24.1 00:03:00.685 [235/267] Linking target lib/librte_timer.so.24.1 00:03:00.685 [236/267] Linking target lib/librte_dmadev.so.24.1 00:03:00.685 [237/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:00.685 [238/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:00.685 [239/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:00.685 [240/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:00.685 [241/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:00.685 [242/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:00.685 [243/267] Linking target lib/librte_mempool.so.24.1 00:03:00.685 [244/267] Linking target lib/librte_rcu.so.24.1 00:03:00.942 [245/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:00.942 [246/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:00.942 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:00.942 [248/267] Linking target lib/librte_mbuf.so.24.1 00:03:00.942 [249/267] Generating symbol 
file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:00.942 [250/267] Linking target lib/librte_reorder.so.24.1 00:03:00.942 [251/267] Linking target lib/librte_compressdev.so.24.1 00:03:00.942 [252/267] Linking target lib/librte_cryptodev.so.24.1 00:03:00.942 [253/267] Linking target lib/librte_net.so.24.1 00:03:01.201 [254/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:01.201 [255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:01.201 [256/267] Linking target lib/librte_hash.so.24.1 00:03:01.201 [257/267] Linking target lib/librte_security.so.24.1 00:03:01.201 [258/267] Linking target lib/librte_cmdline.so.24.1 00:03:01.201 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:01.459 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.459 [261/267] Linking target lib/librte_ethdev.so.24.1 00:03:01.459 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:01.717 [263/267] Linking target lib/librte_power.so.24.1 00:03:02.652 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:02.652 [265/267] Linking static target lib/librte_vhost.a 00:03:04.025 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.025 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:04.025 INFO: autodetecting backend as ninja 00:03:04.025 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:18.896 CC lib/log/log.o 00:03:18.896 CC lib/log/log_flags.o 00:03:18.896 CC lib/log/log_deprecated.o 00:03:18.896 CC lib/ut_mock/mock.o 00:03:18.896 CC lib/ut/ut.o 00:03:18.896 LIB libspdk_ut.a 00:03:18.896 LIB libspdk_ut_mock.a 00:03:18.896 SO libspdk_ut.so.2.0 00:03:18.896 SO libspdk_ut_mock.so.6.0 00:03:18.896 LIB libspdk_log.a 00:03:18.896 SYMLINK libspdk_ut.so 00:03:18.896 SYMLINK libspdk_ut_mock.so 00:03:18.896 SO libspdk_log.so.7.1 00:03:18.896 SYMLINK libspdk_log.so 00:03:18.896 CC lib/ioat/ioat.o 00:03:18.896 CC lib/dma/dma.o 00:03:18.896 CC lib/util/bit_array.o 00:03:18.896 CC lib/util/base64.o 00:03:18.896 CC lib/util/cpuset.o 00:03:18.896 CC lib/util/crc32c.o 00:03:18.896 CC lib/util/crc32.o 00:03:18.896 CC lib/util/crc16.o 00:03:18.896 CXX lib/trace_parser/trace.o 00:03:18.896 CC lib/vfio_user/host/vfio_user_pci.o 00:03:18.896 CC lib/util/crc32_ieee.o 00:03:18.896 CC lib/util/crc64.o 00:03:18.896 CC lib/util/dif.o 00:03:18.896 CC lib/util/fd.o 00:03:18.896 CC lib/util/fd_group.o 00:03:18.896 LIB libspdk_dma.a 00:03:18.896 CC lib/util/file.o 00:03:18.897 SO libspdk_dma.so.5.0 00:03:18.897 CC lib/vfio_user/host/vfio_user.o 00:03:18.897 CC lib/util/hexlify.o 00:03:18.897 SYMLINK libspdk_dma.so 00:03:18.897 CC lib/util/iov.o 00:03:18.897 LIB libspdk_ioat.a 00:03:18.897 CC lib/util/math.o 00:03:18.897 SO libspdk_ioat.so.7.0 00:03:18.897 CC lib/util/net.o 00:03:18.897 SYMLINK libspdk_ioat.so 00:03:18.897 CC lib/util/pipe.o 00:03:18.897 CC lib/util/strerror_tls.o 00:03:18.897 CC lib/util/string.o 00:03:18.897 CC lib/util/uuid.o 00:03:18.897 LIB libspdk_vfio_user.a 00:03:18.897 CC lib/util/xor.o 00:03:18.897 SO libspdk_vfio_user.so.5.0 00:03:18.897 CC lib/util/zipf.o 00:03:18.897 CC lib/util/md5.o 00:03:18.897 SYMLINK libspdk_vfio_user.so 00:03:18.897 LIB libspdk_util.a 00:03:18.897 SO libspdk_util.so.10.1 00:03:18.897 LIB libspdk_trace_parser.a 
00:03:18.897 SO libspdk_trace_parser.so.6.0 00:03:18.897 SYMLINK libspdk_util.so 00:03:18.897 SYMLINK libspdk_trace_parser.so 00:03:18.897 CC lib/vmd/vmd.o 00:03:18.897 CC lib/vmd/led.o 00:03:18.897 CC lib/conf/conf.o 00:03:18.897 CC lib/json/json_parse.o 00:03:18.897 CC lib/json/json_util.o 00:03:18.897 CC lib/json/json_write.o 00:03:18.897 CC lib/env_dpdk/env.o 00:03:18.897 CC lib/env_dpdk/memory.o 00:03:18.897 CC lib/idxd/idxd.o 00:03:18.897 CC lib/rdma_utils/rdma_utils.o 00:03:18.897 CC lib/idxd/idxd_user.o 00:03:18.897 LIB libspdk_conf.a 00:03:18.897 CC lib/idxd/idxd_kernel.o 00:03:18.897 SO libspdk_conf.so.6.0 00:03:18.897 CC lib/env_dpdk/pci.o 00:03:18.897 LIB libspdk_rdma_utils.a 00:03:18.897 LIB libspdk_json.a 00:03:18.897 SYMLINK libspdk_conf.so 00:03:18.897 CC lib/env_dpdk/init.o 00:03:18.897 SO libspdk_rdma_utils.so.1.0 00:03:18.897 SO libspdk_json.so.6.0 00:03:19.155 SYMLINK libspdk_rdma_utils.so 00:03:19.155 CC lib/env_dpdk/threads.o 00:03:19.155 SYMLINK libspdk_json.so 00:03:19.155 CC lib/env_dpdk/pci_ioat.o 00:03:19.155 CC lib/env_dpdk/pci_virtio.o 00:03:19.155 CC lib/env_dpdk/pci_vmd.o 00:03:19.155 CC lib/env_dpdk/pci_idxd.o 00:03:19.155 CC lib/rdma_provider/common.o 00:03:19.155 CC lib/env_dpdk/pci_event.o 00:03:19.155 LIB libspdk_vmd.a 00:03:19.155 SO libspdk_vmd.so.6.0 00:03:19.155 SYMLINK libspdk_vmd.so 00:03:19.155 CC lib/env_dpdk/sigbus_handler.o 00:03:19.155 CC lib/env_dpdk/pci_dpdk.o 00:03:19.452 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:19.452 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:19.452 LIB libspdk_idxd.a 00:03:19.452 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:19.452 SO libspdk_idxd.so.12.1 00:03:19.452 SYMLINK libspdk_idxd.so 00:03:19.452 CC lib/jsonrpc/jsonrpc_server.o 00:03:19.452 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:19.452 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:19.452 CC lib/jsonrpc/jsonrpc_client.o 00:03:19.452 LIB libspdk_rdma_provider.a 00:03:19.452 SO libspdk_rdma_provider.so.7.0 00:03:19.710 SYMLINK libspdk_rdma_provider.so 00:03:19.710 LIB libspdk_jsonrpc.a 00:03:19.710 SO libspdk_jsonrpc.so.6.0 00:03:19.710 SYMLINK libspdk_jsonrpc.so 00:03:19.969 LIB libspdk_env_dpdk.a 00:03:19.969 CC lib/rpc/rpc.o 00:03:19.969 SO libspdk_env_dpdk.so.15.1 00:03:20.227 SYMLINK libspdk_env_dpdk.so 00:03:20.227 LIB libspdk_rpc.a 00:03:20.227 SO libspdk_rpc.so.6.0 00:03:20.227 SYMLINK libspdk_rpc.so 00:03:20.486 CC lib/trace/trace.o 00:03:20.486 CC lib/trace/trace_flags.o 00:03:20.486 CC lib/trace/trace_rpc.o 00:03:20.486 CC lib/keyring/keyring_rpc.o 00:03:20.486 CC lib/keyring/keyring.o 00:03:20.486 CC lib/notify/notify.o 00:03:20.486 CC lib/notify/notify_rpc.o 00:03:20.486 LIB libspdk_notify.a 00:03:20.744 SO libspdk_notify.so.6.0 00:03:20.744 LIB libspdk_trace.a 00:03:20.744 SYMLINK libspdk_notify.so 00:03:20.744 LIB libspdk_keyring.a 00:03:20.744 SO libspdk_trace.so.11.0 00:03:20.744 SO libspdk_keyring.so.2.0 00:03:20.744 SYMLINK libspdk_trace.so 00:03:20.744 SYMLINK libspdk_keyring.so 00:03:21.002 CC lib/thread/thread.o 00:03:21.002 CC lib/thread/iobuf.o 00:03:21.002 CC lib/sock/sock.o 00:03:21.002 CC lib/sock/sock_rpc.o 00:03:21.262 LIB libspdk_sock.a 00:03:21.262 SO libspdk_sock.so.10.0 00:03:21.520 SYMLINK libspdk_sock.so 00:03:21.520 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:21.520 CC lib/nvme/nvme_ctrlr.o 00:03:21.520 CC lib/nvme/nvme_fabric.o 00:03:21.520 CC lib/nvme/nvme_pcie_common.o 00:03:21.520 CC lib/nvme/nvme_pcie.o 00:03:21.520 CC lib/nvme/nvme_ns_cmd.o 00:03:21.520 CC lib/nvme/nvme.o 00:03:21.520 CC lib/nvme/nvme_qpair.o 00:03:21.520 CC 
lib/nvme/nvme_ns.o 00:03:22.454 CC lib/nvme/nvme_quirks.o 00:03:22.454 CC lib/nvme/nvme_transport.o 00:03:22.454 CC lib/nvme/nvme_discovery.o 00:03:22.454 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:22.454 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:22.454 LIB libspdk_thread.a 00:03:22.454 CC lib/nvme/nvme_tcp.o 00:03:22.454 CC lib/nvme/nvme_opal.o 00:03:22.454 SO libspdk_thread.so.11.0 00:03:22.454 CC lib/nvme/nvme_io_msg.o 00:03:22.454 SYMLINK libspdk_thread.so 00:03:22.454 CC lib/nvme/nvme_poll_group.o 00:03:22.713 CC lib/nvme/nvme_zns.o 00:03:22.713 CC lib/nvme/nvme_stubs.o 00:03:22.713 CC lib/nvme/nvme_auth.o 00:03:22.971 CC lib/accel/accel.o 00:03:22.971 CC lib/accel/accel_rpc.o 00:03:22.971 CC lib/nvme/nvme_cuse.o 00:03:22.971 CC lib/nvme/nvme_rdma.o 00:03:23.230 CC lib/blob/blobstore.o 00:03:23.230 CC lib/init/json_config.o 00:03:23.230 CC lib/virtio/virtio.o 00:03:23.230 CC lib/fsdev/fsdev.o 00:03:23.230 CC lib/virtio/virtio_vhost_user.o 00:03:23.489 CC lib/init/subsystem.o 00:03:23.489 CC lib/init/subsystem_rpc.o 00:03:23.489 CC lib/fsdev/fsdev_io.o 00:03:23.489 CC lib/virtio/virtio_vfio_user.o 00:03:23.748 CC lib/init/rpc.o 00:03:23.748 CC lib/fsdev/fsdev_rpc.o 00:03:23.748 LIB libspdk_init.a 00:03:23.748 CC lib/virtio/virtio_pci.o 00:03:23.748 SO libspdk_init.so.6.0 00:03:23.748 CC lib/blob/request.o 00:03:23.748 CC lib/blob/zeroes.o 00:03:23.748 CC lib/blob/blob_bs_dev.o 00:03:23.748 SYMLINK libspdk_init.so 00:03:23.748 CC lib/accel/accel_sw.o 00:03:24.007 LIB libspdk_fsdev.a 00:03:24.007 SO libspdk_fsdev.so.2.0 00:03:24.007 SYMLINK libspdk_fsdev.so 00:03:24.007 LIB libspdk_virtio.a 00:03:24.007 CC lib/event/reactor.o 00:03:24.007 CC lib/event/app.o 00:03:24.007 CC lib/event/log_rpc.o 00:03:24.007 CC lib/event/app_rpc.o 00:03:24.007 CC lib/event/scheduler_static.o 00:03:24.007 SO libspdk_virtio.so.7.0 00:03:24.265 LIB libspdk_accel.a 00:03:24.265 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:24.265 SYMLINK libspdk_virtio.so 00:03:24.265 SO libspdk_accel.so.16.0 00:03:24.265 SYMLINK libspdk_accel.so 00:03:24.265 LIB libspdk_nvme.a 00:03:24.529 CC lib/bdev/bdev_rpc.o 00:03:24.529 CC lib/bdev/scsi_nvme.o 00:03:24.529 CC lib/bdev/bdev.o 00:03:24.529 CC lib/bdev/bdev_zone.o 00:03:24.529 CC lib/bdev/part.o 00:03:24.529 LIB libspdk_event.a 00:03:24.529 SO libspdk_nvme.so.15.0 00:03:24.529 SO libspdk_event.so.14.0 00:03:24.529 SYMLINK libspdk_event.so 00:03:24.787 LIB libspdk_fuse_dispatcher.a 00:03:24.787 SO libspdk_fuse_dispatcher.so.1.0 00:03:24.787 SYMLINK libspdk_nvme.so 00:03:24.787 SYMLINK libspdk_fuse_dispatcher.so 00:03:26.689 LIB libspdk_blob.a 00:03:26.689 SO libspdk_blob.so.12.0 00:03:26.689 LIB libspdk_bdev.a 00:03:26.689 SYMLINK libspdk_blob.so 00:03:26.689 SO libspdk_bdev.so.17.0 00:03:26.948 SYMLINK libspdk_bdev.so 00:03:26.948 CC lib/lvol/lvol.o 00:03:26.948 CC lib/blobfs/blobfs.o 00:03:26.948 CC lib/blobfs/tree.o 00:03:26.948 CC lib/nbd/nbd.o 00:03:26.948 CC lib/nbd/nbd_rpc.o 00:03:26.948 CC lib/ublk/ublk.o 00:03:26.948 CC lib/ublk/ublk_rpc.o 00:03:26.948 CC lib/nvmf/ctrlr.o 00:03:26.948 CC lib/scsi/dev.o 00:03:26.948 CC lib/ftl/ftl_core.o 00:03:27.205 CC lib/ftl/ftl_init.o 00:03:27.205 CC lib/ftl/ftl_layout.o 00:03:27.205 CC lib/ftl/ftl_debug.o 00:03:27.205 CC lib/scsi/lun.o 00:03:27.205 CC lib/scsi/port.o 00:03:27.205 LIB libspdk_nbd.a 00:03:27.205 SO libspdk_nbd.so.7.0 00:03:27.462 CC lib/ftl/ftl_io.o 00:03:27.462 CC lib/ftl/ftl_sb.o 00:03:27.462 SYMLINK libspdk_nbd.so 00:03:27.462 CC lib/ftl/ftl_l2p.o 00:03:27.463 CC lib/ftl/ftl_l2p_flat.o 00:03:27.463 CC 
lib/ftl/ftl_nv_cache.o 00:03:27.463 CC lib/scsi/scsi.o 00:03:27.463 CC lib/scsi/scsi_bdev.o 00:03:27.463 CC lib/scsi/scsi_pr.o 00:03:27.463 CC lib/ftl/ftl_band.o 00:03:27.463 LIB libspdk_blobfs.a 00:03:27.720 CC lib/nvmf/ctrlr_discovery.o 00:03:27.720 SO libspdk_blobfs.so.11.0 00:03:27.720 LIB libspdk_ublk.a 00:03:27.720 CC lib/nvmf/ctrlr_bdev.o 00:03:27.720 SO libspdk_ublk.so.3.0 00:03:27.720 SYMLINK libspdk_blobfs.so 00:03:27.720 CC lib/nvmf/subsystem.o 00:03:27.720 SYMLINK libspdk_ublk.so 00:03:27.720 CC lib/nvmf/nvmf.o 00:03:27.978 CC lib/scsi/scsi_rpc.o 00:03:27.978 LIB libspdk_lvol.a 00:03:27.978 SO libspdk_lvol.so.11.0 00:03:27.978 CC lib/ftl/ftl_band_ops.o 00:03:27.978 SYMLINK libspdk_lvol.so 00:03:27.978 CC lib/ftl/ftl_writer.o 00:03:27.978 CC lib/nvmf/nvmf_rpc.o 00:03:27.978 CC lib/scsi/task.o 00:03:27.978 CC lib/nvmf/transport.o 00:03:28.236 CC lib/ftl/ftl_rq.o 00:03:28.236 LIB libspdk_scsi.a 00:03:28.236 CC lib/ftl/ftl_reloc.o 00:03:28.236 SO libspdk_scsi.so.9.0 00:03:28.236 CC lib/ftl/ftl_l2p_cache.o 00:03:28.236 SYMLINK libspdk_scsi.so 00:03:28.236 CC lib/ftl/ftl_p2l.o 00:03:28.236 CC lib/ftl/ftl_p2l_log.o 00:03:28.493 CC lib/nvmf/tcp.o 00:03:28.493 CC lib/iscsi/conn.o 00:03:28.751 CC lib/ftl/mngt/ftl_mngt.o 00:03:28.751 CC lib/nvmf/stubs.o 00:03:28.751 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:28.751 CC lib/vhost/vhost.o 00:03:28.751 CC lib/vhost/vhost_rpc.o 00:03:28.751 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:28.751 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:28.751 CC lib/vhost/vhost_scsi.o 00:03:28.751 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:29.009 CC lib/iscsi/init_grp.o 00:03:29.009 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:29.009 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:29.009 CC lib/nvmf/mdns_server.o 00:03:29.009 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:29.009 CC lib/iscsi/iscsi.o 00:03:29.267 CC lib/iscsi/param.o 00:03:29.267 CC lib/vhost/vhost_blk.o 00:03:29.267 CC lib/vhost/rte_vhost_user.o 00:03:29.267 CC lib/iscsi/portal_grp.o 00:03:29.267 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:29.525 CC lib/nvmf/rdma.o 00:03:29.525 CC lib/iscsi/tgt_node.o 00:03:29.525 CC lib/iscsi/iscsi_subsystem.o 00:03:29.525 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:29.525 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:29.525 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:29.525 CC lib/iscsi/iscsi_rpc.o 00:03:29.785 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:29.785 CC lib/ftl/utils/ftl_conf.o 00:03:29.785 CC lib/iscsi/task.o 00:03:29.785 CC lib/nvmf/auth.o 00:03:29.785 CC lib/ftl/utils/ftl_md.o 00:03:29.785 CC lib/ftl/utils/ftl_mempool.o 00:03:30.046 CC lib/ftl/utils/ftl_bitmap.o 00:03:30.046 LIB libspdk_vhost.a 00:03:30.046 CC lib/ftl/utils/ftl_property.o 00:03:30.046 SO libspdk_vhost.so.8.0 00:03:30.046 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:30.046 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:30.046 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:30.046 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:30.046 SYMLINK libspdk_vhost.so 00:03:30.046 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:30.046 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:30.305 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:30.305 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:30.305 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:30.305 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:30.305 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:30.305 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:30.305 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:30.305 CC lib/ftl/base/ftl_base_dev.o 00:03:30.305 CC lib/ftl/base/ftl_base_bdev.o 00:03:30.305 LIB libspdk_iscsi.a 00:03:30.305 CC lib/ftl/ftl_trace.o 00:03:30.305 SO 
libspdk_iscsi.so.8.0 00:03:30.581 LIB libspdk_ftl.a 00:03:30.581 SYMLINK libspdk_iscsi.so 00:03:30.581 SO libspdk_ftl.so.9.0 00:03:30.839 SYMLINK libspdk_ftl.so 00:03:31.098 LIB libspdk_nvmf.a 00:03:31.356 SO libspdk_nvmf.so.20.0 00:03:31.615 SYMLINK libspdk_nvmf.so 00:03:31.873 CC module/env_dpdk/env_dpdk_rpc.o 00:03:31.873 CC module/accel/ioat/accel_ioat.o 00:03:31.873 CC module/accel/dsa/accel_dsa.o 00:03:31.873 CC module/accel/error/accel_error.o 00:03:31.873 CC module/sock/posix/posix.o 00:03:31.873 CC module/accel/iaa/accel_iaa.o 00:03:31.873 CC module/fsdev/aio/fsdev_aio.o 00:03:31.873 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:31.873 CC module/keyring/file/keyring.o 00:03:31.873 CC module/blob/bdev/blob_bdev.o 00:03:31.873 LIB libspdk_env_dpdk_rpc.a 00:03:31.873 SO libspdk_env_dpdk_rpc.so.6.0 00:03:31.873 CC module/keyring/file/keyring_rpc.o 00:03:31.873 SYMLINK libspdk_env_dpdk_rpc.so 00:03:31.873 CC module/accel/ioat/accel_ioat_rpc.o 00:03:31.873 CC module/accel/dsa/accel_dsa_rpc.o 00:03:32.132 CC module/accel/iaa/accel_iaa_rpc.o 00:03:32.132 CC module/accel/error/accel_error_rpc.o 00:03:32.132 LIB libspdk_scheduler_dynamic.a 00:03:32.132 SO libspdk_scheduler_dynamic.so.4.0 00:03:32.132 LIB libspdk_keyring_file.a 00:03:32.132 SO libspdk_keyring_file.so.2.0 00:03:32.132 LIB libspdk_accel_ioat.a 00:03:32.132 LIB libspdk_accel_iaa.a 00:03:32.132 SYMLINK libspdk_scheduler_dynamic.so 00:03:32.132 LIB libspdk_blob_bdev.a 00:03:32.132 SO libspdk_accel_iaa.so.3.0 00:03:32.132 SO libspdk_accel_ioat.so.6.0 00:03:32.132 LIB libspdk_accel_dsa.a 00:03:32.132 SO libspdk_blob_bdev.so.12.0 00:03:32.132 SO libspdk_accel_dsa.so.5.0 00:03:32.132 SYMLINK libspdk_keyring_file.so 00:03:32.132 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:32.132 LIB libspdk_accel_error.a 00:03:32.132 SYMLINK libspdk_accel_ioat.so 00:03:32.132 SYMLINK libspdk_accel_iaa.so 00:03:32.132 CC module/fsdev/aio/linux_aio_mgr.o 00:03:32.132 SO libspdk_accel_error.so.2.0 00:03:32.132 SYMLINK libspdk_blob_bdev.so 00:03:32.132 SYMLINK libspdk_accel_dsa.so 00:03:32.132 CC module/keyring/linux/keyring.o 00:03:32.132 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:32.390 SYMLINK libspdk_accel_error.so 00:03:32.390 CC module/keyring/linux/keyring_rpc.o 00:03:32.390 CC module/scheduler/gscheduler/gscheduler.o 00:03:32.390 LIB libspdk_keyring_linux.a 00:03:32.390 LIB libspdk_scheduler_dpdk_governor.a 00:03:32.390 SO libspdk_keyring_linux.so.1.0 00:03:32.390 LIB libspdk_fsdev_aio.a 00:03:32.390 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:32.390 CC module/blobfs/bdev/blobfs_bdev.o 00:03:32.390 CC module/bdev/error/vbdev_error.o 00:03:32.390 CC module/bdev/delay/vbdev_delay.o 00:03:32.390 SO libspdk_fsdev_aio.so.1.0 00:03:32.390 LIB libspdk_scheduler_gscheduler.a 00:03:32.390 LIB libspdk_sock_posix.a 00:03:32.390 SYMLINK libspdk_keyring_linux.so 00:03:32.390 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:32.390 SO libspdk_scheduler_gscheduler.so.4.0 00:03:32.390 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:32.390 CC module/bdev/gpt/gpt.o 00:03:32.390 SO libspdk_sock_posix.so.6.0 00:03:32.390 CC module/bdev/gpt/vbdev_gpt.o 00:03:32.390 CC module/bdev/lvol/vbdev_lvol.o 00:03:32.390 SYMLINK libspdk_scheduler_gscheduler.so 00:03:32.390 SYMLINK libspdk_fsdev_aio.so 00:03:32.649 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:32.649 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:32.649 SYMLINK libspdk_sock_posix.so 00:03:32.649 CC module/bdev/error/vbdev_error_rpc.o 00:03:32.649 LIB libspdk_blobfs_bdev.a 00:03:32.649 SO 
libspdk_blobfs_bdev.so.6.0 00:03:32.649 CC module/bdev/malloc/bdev_malloc.o 00:03:32.649 CC module/bdev/null/bdev_null.o 00:03:32.649 LIB libspdk_bdev_gpt.a 00:03:32.649 SYMLINK libspdk_blobfs_bdev.so 00:03:32.649 LIB libspdk_bdev_delay.a 00:03:32.649 CC module/bdev/null/bdev_null_rpc.o 00:03:32.649 SO libspdk_bdev_gpt.so.6.0 00:03:32.649 LIB libspdk_bdev_error.a 00:03:32.649 SO libspdk_bdev_delay.so.6.0 00:03:32.649 SO libspdk_bdev_error.so.6.0 00:03:32.649 CC module/bdev/nvme/bdev_nvme.o 00:03:32.909 SYMLINK libspdk_bdev_gpt.so 00:03:32.909 CC module/bdev/passthru/vbdev_passthru.o 00:03:32.909 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:32.909 SYMLINK libspdk_bdev_error.so 00:03:32.909 SYMLINK libspdk_bdev_delay.so 00:03:32.909 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:32.909 CC module/bdev/nvme/nvme_rpc.o 00:03:32.909 CC module/bdev/nvme/bdev_mdns_client.o 00:03:32.909 CC module/bdev/nvme/vbdev_opal.o 00:03:32.909 LIB libspdk_bdev_null.a 00:03:32.909 SO libspdk_bdev_null.so.6.0 00:03:32.909 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:32.909 SYMLINK libspdk_bdev_null.so 00:03:32.909 LIB libspdk_bdev_lvol.a 00:03:32.909 CC module/bdev/raid/bdev_raid.o 00:03:33.170 SO libspdk_bdev_lvol.so.6.0 00:03:33.170 LIB libspdk_bdev_passthru.a 00:03:33.170 CC module/bdev/split/vbdev_split.o 00:03:33.170 SO libspdk_bdev_passthru.so.6.0 00:03:33.170 LIB libspdk_bdev_malloc.a 00:03:33.170 SYMLINK libspdk_bdev_lvol.so 00:03:33.170 CC module/bdev/split/vbdev_split_rpc.o 00:03:33.170 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:33.170 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:33.170 SO libspdk_bdev_malloc.so.6.0 00:03:33.170 CC module/bdev/xnvme/bdev_xnvme.o 00:03:33.170 SYMLINK libspdk_bdev_passthru.so 00:03:33.170 CC module/bdev/raid/bdev_raid_rpc.o 00:03:33.170 SYMLINK libspdk_bdev_malloc.so 00:03:33.170 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:33.170 LIB libspdk_bdev_split.a 00:03:33.170 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:33.170 CC module/bdev/aio/bdev_aio.o 00:03:33.170 SO libspdk_bdev_split.so.6.0 00:03:33.429 SYMLINK libspdk_bdev_split.so 00:03:33.429 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:33.429 LIB libspdk_bdev_zone_block.a 00:03:33.429 SO libspdk_bdev_zone_block.so.6.0 00:03:33.429 CC module/bdev/ftl/bdev_ftl.o 00:03:33.429 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:33.429 CC module/bdev/raid/bdev_raid_sb.o 00:03:33.429 CC module/bdev/iscsi/bdev_iscsi.o 00:03:33.429 SYMLINK libspdk_bdev_zone_block.so 00:03:33.429 CC module/bdev/raid/raid0.o 00:03:33.429 LIB libspdk_bdev_xnvme.a 00:03:33.429 SO libspdk_bdev_xnvme.so.3.0 00:03:33.429 CC module/bdev/aio/bdev_aio_rpc.o 00:03:33.689 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:33.689 SYMLINK libspdk_bdev_xnvme.so 00:03:33.689 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:33.689 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:33.689 LIB libspdk_bdev_ftl.a 00:03:33.689 CC module/bdev/raid/raid1.o 00:03:33.689 LIB libspdk_bdev_aio.a 00:03:33.689 SO libspdk_bdev_ftl.so.6.0 00:03:33.689 SO libspdk_bdev_aio.so.6.0 00:03:33.689 CC module/bdev/raid/concat.o 00:03:33.689 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:33.689 SYMLINK libspdk_bdev_ftl.so 00:03:33.689 SYMLINK libspdk_bdev_aio.so 00:03:33.689 LIB libspdk_bdev_iscsi.a 00:03:33.689 SO libspdk_bdev_iscsi.so.6.0 00:03:33.951 SYMLINK libspdk_bdev_iscsi.so 00:03:33.951 LIB libspdk_bdev_raid.a 00:03:33.951 SO libspdk_bdev_raid.so.6.0 00:03:33.951 SYMLINK libspdk_bdev_raid.so 00:03:34.211 LIB libspdk_bdev_virtio.a 00:03:34.211 SO libspdk_bdev_virtio.so.6.0 
00:03:34.211 SYMLINK libspdk_bdev_virtio.so 00:03:35.595 LIB libspdk_bdev_nvme.a 00:03:35.595 SO libspdk_bdev_nvme.so.7.1 00:03:35.595 SYMLINK libspdk_bdev_nvme.so 00:03:35.855 CC module/event/subsystems/iobuf/iobuf.o 00:03:35.855 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:35.855 CC module/event/subsystems/keyring/keyring.o 00:03:35.855 CC module/event/subsystems/sock/sock.o 00:03:35.855 CC module/event/subsystems/vmd/vmd.o 00:03:35.855 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:35.855 CC module/event/subsystems/scheduler/scheduler.o 00:03:35.855 CC module/event/subsystems/fsdev/fsdev.o 00:03:35.855 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:35.855 LIB libspdk_event_keyring.a 00:03:35.855 LIB libspdk_event_scheduler.a 00:03:36.115 LIB libspdk_event_iobuf.a 00:03:36.116 LIB libspdk_event_sock.a 00:03:36.116 LIB libspdk_event_vhost_blk.a 00:03:36.116 LIB libspdk_event_fsdev.a 00:03:36.116 SO libspdk_event_scheduler.so.4.0 00:03:36.116 SO libspdk_event_keyring.so.1.0 00:03:36.116 SO libspdk_event_sock.so.5.0 00:03:36.116 LIB libspdk_event_vmd.a 00:03:36.116 SO libspdk_event_vhost_blk.so.3.0 00:03:36.116 SO libspdk_event_iobuf.so.3.0 00:03:36.116 SO libspdk_event_fsdev.so.1.0 00:03:36.116 SO libspdk_event_vmd.so.6.0 00:03:36.116 SYMLINK libspdk_event_keyring.so 00:03:36.116 SYMLINK libspdk_event_sock.so 00:03:36.116 SYMLINK libspdk_event_scheduler.so 00:03:36.116 SYMLINK libspdk_event_iobuf.so 00:03:36.116 SYMLINK libspdk_event_vhost_blk.so 00:03:36.116 SYMLINK libspdk_event_fsdev.so 00:03:36.116 SYMLINK libspdk_event_vmd.so 00:03:36.376 CC module/event/subsystems/accel/accel.o 00:03:36.376 LIB libspdk_event_accel.a 00:03:36.376 SO libspdk_event_accel.so.6.0 00:03:36.376 SYMLINK libspdk_event_accel.so 00:03:36.636 CC module/event/subsystems/bdev/bdev.o 00:03:36.894 LIB libspdk_event_bdev.a 00:03:36.894 SO libspdk_event_bdev.so.6.0 00:03:36.894 SYMLINK libspdk_event_bdev.so 00:03:37.151 CC module/event/subsystems/scsi/scsi.o 00:03:37.151 CC module/event/subsystems/nbd/nbd.o 00:03:37.151 CC module/event/subsystems/ublk/ublk.o 00:03:37.151 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:37.151 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:37.151 LIB libspdk_event_nbd.a 00:03:37.151 LIB libspdk_event_ublk.a 00:03:37.151 LIB libspdk_event_scsi.a 00:03:37.151 SO libspdk_event_nbd.so.6.0 00:03:37.151 SO libspdk_event_ublk.so.3.0 00:03:37.151 SO libspdk_event_scsi.so.6.0 00:03:37.151 LIB libspdk_event_nvmf.a 00:03:37.151 SYMLINK libspdk_event_nbd.so 00:03:37.151 SYMLINK libspdk_event_scsi.so 00:03:37.151 SYMLINK libspdk_event_ublk.so 00:03:37.409 SO libspdk_event_nvmf.so.6.0 00:03:37.409 SYMLINK libspdk_event_nvmf.so 00:03:37.409 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:37.409 CC module/event/subsystems/iscsi/iscsi.o 00:03:37.669 LIB libspdk_event_vhost_scsi.a 00:03:37.669 SO libspdk_event_vhost_scsi.so.3.0 00:03:37.669 LIB libspdk_event_iscsi.a 00:03:37.669 SO libspdk_event_iscsi.so.6.0 00:03:37.669 SYMLINK libspdk_event_vhost_scsi.so 00:03:37.669 SYMLINK libspdk_event_iscsi.so 00:03:37.930 SO libspdk.so.6.0 00:03:37.930 SYMLINK libspdk.so 00:03:37.930 CC app/spdk_nvme_identify/identify.o 00:03:37.930 CC app/spdk_nvme_perf/perf.o 00:03:37.930 CC app/spdk_lspci/spdk_lspci.o 00:03:37.930 CXX app/trace/trace.o 00:03:37.930 CC app/trace_record/trace_record.o 00:03:37.930 CC app/nvmf_tgt/nvmf_main.o 00:03:37.930 CC app/iscsi_tgt/iscsi_tgt.o 00:03:38.188 CC app/spdk_tgt/spdk_tgt.o 00:03:38.188 CC examples/util/zipf/zipf.o 00:03:38.188 CC 
test/thread/poller_perf/poller_perf.o 00:03:38.188 LINK spdk_lspci 00:03:38.188 LINK nvmf_tgt 00:03:38.188 LINK spdk_trace_record 00:03:38.188 LINK poller_perf 00:03:38.188 LINK iscsi_tgt 00:03:38.188 LINK spdk_tgt 00:03:38.188 LINK zipf 00:03:38.449 LINK spdk_trace 00:03:38.449 CC app/spdk_nvme_discover/discovery_aer.o 00:03:38.449 CC app/spdk_top/spdk_top.o 00:03:38.449 CC app/spdk_dd/spdk_dd.o 00:03:38.449 CC examples/ioat/perf/perf.o 00:03:38.449 CC app/fio/nvme/fio_plugin.o 00:03:38.731 CC test/dma/test_dma/test_dma.o 00:03:38.731 LINK spdk_nvme_discover 00:03:38.731 CC examples/vmd/lsvmd/lsvmd.o 00:03:38.731 CC examples/idxd/perf/perf.o 00:03:38.731 LINK spdk_nvme_perf 00:03:38.731 LINK lsvmd 00:03:38.731 LINK ioat_perf 00:03:38.731 LINK spdk_nvme_identify 00:03:38.731 LINK spdk_dd 00:03:39.029 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:39.029 CC examples/vmd/led/led.o 00:03:39.029 CC examples/ioat/verify/verify.o 00:03:39.029 LINK idxd_perf 00:03:39.029 LINK interrupt_tgt 00:03:39.029 CC app/vhost/vhost.o 00:03:39.029 LINK led 00:03:39.029 CC examples/thread/thread/thread_ex.o 00:03:39.029 CC app/fio/bdev/fio_plugin.o 00:03:39.029 LINK test_dma 00:03:39.029 LINK spdk_nvme 00:03:39.029 LINK verify 00:03:39.287 LINK vhost 00:03:39.287 CC examples/sock/hello_world/hello_sock.o 00:03:39.287 TEST_HEADER include/spdk/accel.h 00:03:39.287 TEST_HEADER include/spdk/accel_module.h 00:03:39.287 TEST_HEADER include/spdk/assert.h 00:03:39.287 TEST_HEADER include/spdk/barrier.h 00:03:39.287 TEST_HEADER include/spdk/base64.h 00:03:39.287 TEST_HEADER include/spdk/bdev.h 00:03:39.287 TEST_HEADER include/spdk/bdev_module.h 00:03:39.287 TEST_HEADER include/spdk/bdev_zone.h 00:03:39.287 TEST_HEADER include/spdk/bit_array.h 00:03:39.287 TEST_HEADER include/spdk/bit_pool.h 00:03:39.287 TEST_HEADER include/spdk/blob_bdev.h 00:03:39.287 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:39.287 TEST_HEADER include/spdk/blobfs.h 00:03:39.287 TEST_HEADER include/spdk/blob.h 00:03:39.287 TEST_HEADER include/spdk/conf.h 00:03:39.287 TEST_HEADER include/spdk/config.h 00:03:39.287 TEST_HEADER include/spdk/cpuset.h 00:03:39.287 TEST_HEADER include/spdk/crc16.h 00:03:39.287 TEST_HEADER include/spdk/crc32.h 00:03:39.287 TEST_HEADER include/spdk/crc64.h 00:03:39.287 TEST_HEADER include/spdk/dif.h 00:03:39.287 TEST_HEADER include/spdk/dma.h 00:03:39.287 TEST_HEADER include/spdk/endian.h 00:03:39.287 TEST_HEADER include/spdk/env_dpdk.h 00:03:39.287 TEST_HEADER include/spdk/env.h 00:03:39.287 TEST_HEADER include/spdk/event.h 00:03:39.287 TEST_HEADER include/spdk/fd_group.h 00:03:39.287 TEST_HEADER include/spdk/fd.h 00:03:39.287 TEST_HEADER include/spdk/file.h 00:03:39.287 TEST_HEADER include/spdk/fsdev.h 00:03:39.287 TEST_HEADER include/spdk/fsdev_module.h 00:03:39.287 TEST_HEADER include/spdk/ftl.h 00:03:39.287 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:39.287 TEST_HEADER include/spdk/gpt_spec.h 00:03:39.287 TEST_HEADER include/spdk/hexlify.h 00:03:39.287 TEST_HEADER include/spdk/histogram_data.h 00:03:39.287 TEST_HEADER include/spdk/idxd.h 00:03:39.287 LINK thread 00:03:39.287 TEST_HEADER include/spdk/idxd_spec.h 00:03:39.287 TEST_HEADER include/spdk/init.h 00:03:39.287 TEST_HEADER include/spdk/ioat.h 00:03:39.287 TEST_HEADER include/spdk/ioat_spec.h 00:03:39.287 TEST_HEADER include/spdk/iscsi_spec.h 00:03:39.287 TEST_HEADER include/spdk/json.h 00:03:39.287 TEST_HEADER include/spdk/jsonrpc.h 00:03:39.287 TEST_HEADER include/spdk/keyring.h 00:03:39.287 TEST_HEADER include/spdk/keyring_module.h 00:03:39.287 CC 
test/app/bdev_svc/bdev_svc.o 00:03:39.287 TEST_HEADER include/spdk/likely.h 00:03:39.287 TEST_HEADER include/spdk/log.h 00:03:39.287 TEST_HEADER include/spdk/lvol.h 00:03:39.287 TEST_HEADER include/spdk/md5.h 00:03:39.287 TEST_HEADER include/spdk/memory.h 00:03:39.287 TEST_HEADER include/spdk/mmio.h 00:03:39.287 TEST_HEADER include/spdk/nbd.h 00:03:39.287 TEST_HEADER include/spdk/net.h 00:03:39.287 TEST_HEADER include/spdk/notify.h 00:03:39.287 TEST_HEADER include/spdk/nvme.h 00:03:39.287 TEST_HEADER include/spdk/nvme_intel.h 00:03:39.287 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:39.287 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:39.287 TEST_HEADER include/spdk/nvme_spec.h 00:03:39.287 TEST_HEADER include/spdk/nvme_zns.h 00:03:39.287 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:39.287 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:39.287 TEST_HEADER include/spdk/nvmf.h 00:03:39.287 TEST_HEADER include/spdk/nvmf_spec.h 00:03:39.287 TEST_HEADER include/spdk/nvmf_transport.h 00:03:39.287 TEST_HEADER include/spdk/opal.h 00:03:39.287 TEST_HEADER include/spdk/opal_spec.h 00:03:39.287 TEST_HEADER include/spdk/pci_ids.h 00:03:39.287 TEST_HEADER include/spdk/pipe.h 00:03:39.287 TEST_HEADER include/spdk/queue.h 00:03:39.287 TEST_HEADER include/spdk/reduce.h 00:03:39.287 LINK spdk_top 00:03:39.287 CC test/env/mem_callbacks/mem_callbacks.o 00:03:39.287 TEST_HEADER include/spdk/rpc.h 00:03:39.287 CC test/event/event_perf/event_perf.o 00:03:39.287 TEST_HEADER include/spdk/scheduler.h 00:03:39.546 TEST_HEADER include/spdk/scsi.h 00:03:39.546 TEST_HEADER include/spdk/scsi_spec.h 00:03:39.546 TEST_HEADER include/spdk/sock.h 00:03:39.546 TEST_HEADER include/spdk/stdinc.h 00:03:39.546 TEST_HEADER include/spdk/string.h 00:03:39.546 TEST_HEADER include/spdk/thread.h 00:03:39.546 TEST_HEADER include/spdk/trace.h 00:03:39.546 TEST_HEADER include/spdk/trace_parser.h 00:03:39.546 TEST_HEADER include/spdk/tree.h 00:03:39.546 TEST_HEADER include/spdk/ublk.h 00:03:39.546 TEST_HEADER include/spdk/util.h 00:03:39.546 TEST_HEADER include/spdk/uuid.h 00:03:39.547 TEST_HEADER include/spdk/version.h 00:03:39.547 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:39.547 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:39.547 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:39.547 TEST_HEADER include/spdk/vhost.h 00:03:39.547 CC test/event/reactor/reactor.o 00:03:39.547 TEST_HEADER include/spdk/vmd.h 00:03:39.547 TEST_HEADER include/spdk/xor.h 00:03:39.547 TEST_HEADER include/spdk/zipf.h 00:03:39.547 CXX test/cpp_headers/accel.o 00:03:39.547 LINK spdk_bdev 00:03:39.547 LINK hello_sock 00:03:39.547 CXX test/cpp_headers/accel_module.o 00:03:39.547 LINK bdev_svc 00:03:39.547 LINK reactor 00:03:39.547 CXX test/cpp_headers/assert.o 00:03:39.547 LINK event_perf 00:03:39.547 CC test/event/reactor_perf/reactor_perf.o 00:03:39.807 CC test/event/app_repeat/app_repeat.o 00:03:39.807 CXX test/cpp_headers/barrier.o 00:03:39.807 CC test/env/vtophys/vtophys.o 00:03:39.807 LINK reactor_perf 00:03:39.807 CC test/event/scheduler/scheduler.o 00:03:39.807 CC examples/accel/perf/accel_perf.o 00:03:39.807 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:39.807 CC test/app/histogram_perf/histogram_perf.o 00:03:39.807 LINK app_repeat 00:03:39.807 LINK nvme_fuzz 00:03:39.807 CXX test/cpp_headers/base64.o 00:03:39.807 CXX test/cpp_headers/bdev.o 00:03:39.807 LINK mem_callbacks 00:03:39.807 LINK env_dpdk_post_init 00:03:39.807 LINK vtophys 00:03:39.807 LINK histogram_perf 00:03:40.068 CXX test/cpp_headers/bdev_module.o 00:03:40.068 CXX 
test/cpp_headers/bdev_zone.o 00:03:40.068 LINK scheduler 00:03:40.068 CXX test/cpp_headers/bit_array.o 00:03:40.068 CXX test/cpp_headers/bit_pool.o 00:03:40.068 CXX test/cpp_headers/blob_bdev.o 00:03:40.068 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:40.068 CC test/env/pci/pci_ut.o 00:03:40.068 CC test/env/memory/memory_ut.o 00:03:40.068 CXX test/cpp_headers/blobfs_bdev.o 00:03:40.328 CC test/rpc_client/rpc_client_test.o 00:03:40.328 LINK accel_perf 00:03:40.328 CXX test/cpp_headers/blobfs.o 00:03:40.328 CC test/nvme/aer/aer.o 00:03:40.328 CC test/blobfs/mkfs/mkfs.o 00:03:40.328 CC test/accel/dif/dif.o 00:03:40.328 LINK rpc_client_test 00:03:40.328 CXX test/cpp_headers/blob.o 00:03:40.586 CC test/lvol/esnap/esnap.o 00:03:40.587 LINK mkfs 00:03:40.587 LINK pci_ut 00:03:40.587 CXX test/cpp_headers/conf.o 00:03:40.587 LINK aer 00:03:40.587 CC examples/blob/hello_world/hello_blob.o 00:03:40.587 CXX test/cpp_headers/config.o 00:03:40.587 CC examples/blob/cli/blobcli.o 00:03:40.845 CXX test/cpp_headers/cpuset.o 00:03:40.845 CC test/nvme/reset/reset.o 00:03:40.845 CC test/nvme/sgl/sgl.o 00:03:40.845 LINK hello_blob 00:03:40.845 CXX test/cpp_headers/crc16.o 00:03:40.845 CC examples/nvme/hello_world/hello_world.o 00:03:40.845 LINK reset 00:03:41.103 CXX test/cpp_headers/crc32.o 00:03:41.103 LINK blobcli 00:03:41.103 LINK sgl 00:03:41.103 CC test/nvme/e2edp/nvme_dp.o 00:03:41.103 LINK dif 00:03:41.103 LINK hello_world 00:03:41.103 CXX test/cpp_headers/crc64.o 00:03:41.103 CC test/nvme/overhead/overhead.o 00:03:41.103 LINK memory_ut 00:03:41.361 CXX test/cpp_headers/dif.o 00:03:41.361 CC test/nvme/err_injection/err_injection.o 00:03:41.361 CC examples/nvme/reconnect/reconnect.o 00:03:41.361 CC test/nvme/startup/startup.o 00:03:41.361 CC test/nvme/reserve/reserve.o 00:03:41.361 LINK nvme_dp 00:03:41.361 CXX test/cpp_headers/dma.o 00:03:41.361 LINK startup 00:03:41.361 LINK err_injection 00:03:41.361 LINK overhead 00:03:41.619 CC test/bdev/bdevio/bdevio.o 00:03:41.619 LINK reserve 00:03:41.619 LINK iscsi_fuzz 00:03:41.619 CXX test/cpp_headers/endian.o 00:03:41.619 CC test/nvme/simple_copy/simple_copy.o 00:03:41.619 LINK reconnect 00:03:41.619 CC test/nvme/connect_stress/connect_stress.o 00:03:41.619 CC test/nvme/boot_partition/boot_partition.o 00:03:41.619 CC test/nvme/compliance/nvme_compliance.o 00:03:41.619 CXX test/cpp_headers/env_dpdk.o 00:03:41.619 CC test/nvme/fused_ordering/fused_ordering.o 00:03:41.619 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:41.876 LINK simple_copy 00:03:41.876 LINK boot_partition 00:03:41.876 CXX test/cpp_headers/env.o 00:03:41.876 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:41.876 LINK connect_stress 00:03:41.876 LINK bdevio 00:03:41.876 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:41.876 LINK fused_ordering 00:03:41.876 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:41.876 CXX test/cpp_headers/event.o 00:03:41.876 CC test/nvme/fdp/fdp.o 00:03:41.876 CC test/nvme/cuse/cuse.o 00:03:41.876 LINK nvme_compliance 00:03:41.876 CC examples/nvme/arbitration/arbitration.o 00:03:42.133 LINK doorbell_aers 00:03:42.133 CXX test/cpp_headers/fd_group.o 00:03:42.133 CC test/app/jsoncat/jsoncat.o 00:03:42.133 LINK jsoncat 00:03:42.133 CXX test/cpp_headers/fd.o 00:03:42.133 LINK vhost_fuzz 00:03:42.133 LINK arbitration 00:03:42.390 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:42.390 LINK fdp 00:03:42.390 LINK nvme_manage 00:03:42.390 CC examples/bdev/hello_world/hello_bdev.o 00:03:42.390 CXX test/cpp_headers/file.o 00:03:42.390 CC test/app/stub/stub.o 00:03:42.390 
CXX test/cpp_headers/fsdev.o 00:03:42.390 CXX test/cpp_headers/fsdev_module.o 00:03:42.390 CC examples/nvme/hotplug/hotplug.o 00:03:42.390 CC examples/bdev/bdevperf/bdevperf.o 00:03:42.390 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:42.390 LINK hello_bdev 00:03:42.647 CXX test/cpp_headers/ftl.o 00:03:42.647 LINK hello_fsdev 00:03:42.647 LINK stub 00:03:42.647 LINK hotplug 00:03:42.647 LINK cmb_copy 00:03:42.647 CC examples/nvme/abort/abort.o 00:03:42.647 CXX test/cpp_headers/fuse_dispatcher.o 00:03:42.647 CXX test/cpp_headers/gpt_spec.o 00:03:42.647 CXX test/cpp_headers/hexlify.o 00:03:42.647 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:42.647 CXX test/cpp_headers/histogram_data.o 00:03:42.647 CXX test/cpp_headers/idxd.o 00:03:42.905 CXX test/cpp_headers/idxd_spec.o 00:03:42.905 CXX test/cpp_headers/init.o 00:03:42.905 CXX test/cpp_headers/ioat.o 00:03:42.905 CXX test/cpp_headers/ioat_spec.o 00:03:42.905 LINK pmr_persistence 00:03:42.905 CXX test/cpp_headers/iscsi_spec.o 00:03:42.905 CXX test/cpp_headers/json.o 00:03:42.905 CXX test/cpp_headers/jsonrpc.o 00:03:42.905 CXX test/cpp_headers/keyring.o 00:03:42.905 CXX test/cpp_headers/keyring_module.o 00:03:43.164 CXX test/cpp_headers/likely.o 00:03:43.164 LINK abort 00:03:43.164 CXX test/cpp_headers/log.o 00:03:43.164 LINK cuse 00:03:43.164 CXX test/cpp_headers/lvol.o 00:03:43.164 CXX test/cpp_headers/md5.o 00:03:43.164 CXX test/cpp_headers/memory.o 00:03:43.164 CXX test/cpp_headers/mmio.o 00:03:43.164 CXX test/cpp_headers/nbd.o 00:03:43.164 CXX test/cpp_headers/net.o 00:03:43.164 LINK bdevperf 00:03:43.164 CXX test/cpp_headers/notify.o 00:03:43.164 CXX test/cpp_headers/nvme.o 00:03:43.164 CXX test/cpp_headers/nvme_intel.o 00:03:43.164 CXX test/cpp_headers/nvme_ocssd.o 00:03:43.164 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:43.422 CXX test/cpp_headers/nvme_spec.o 00:03:43.422 CXX test/cpp_headers/nvme_zns.o 00:03:43.422 CXX test/cpp_headers/nvmf_cmd.o 00:03:43.422 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:43.422 CXX test/cpp_headers/nvmf.o 00:03:43.422 CXX test/cpp_headers/nvmf_spec.o 00:03:43.422 CXX test/cpp_headers/nvmf_transport.o 00:03:43.423 CXX test/cpp_headers/opal.o 00:03:43.423 CXX test/cpp_headers/opal_spec.o 00:03:43.423 CXX test/cpp_headers/pci_ids.o 00:03:43.423 CXX test/cpp_headers/pipe.o 00:03:43.423 CC examples/nvmf/nvmf/nvmf.o 00:03:43.423 CXX test/cpp_headers/queue.o 00:03:43.423 CXX test/cpp_headers/reduce.o 00:03:43.423 CXX test/cpp_headers/rpc.o 00:03:43.681 CXX test/cpp_headers/scheduler.o 00:03:43.681 CXX test/cpp_headers/scsi.o 00:03:43.681 CXX test/cpp_headers/scsi_spec.o 00:03:43.681 CXX test/cpp_headers/sock.o 00:03:43.681 CXX test/cpp_headers/stdinc.o 00:03:43.681 CXX test/cpp_headers/string.o 00:03:43.681 CXX test/cpp_headers/thread.o 00:03:43.681 CXX test/cpp_headers/trace.o 00:03:43.681 CXX test/cpp_headers/trace_parser.o 00:03:43.681 CXX test/cpp_headers/tree.o 00:03:43.681 CXX test/cpp_headers/ublk.o 00:03:43.681 CXX test/cpp_headers/util.o 00:03:43.681 CXX test/cpp_headers/uuid.o 00:03:43.681 CXX test/cpp_headers/version.o 00:03:43.681 CXX test/cpp_headers/vfio_user_pci.o 00:03:43.681 CXX test/cpp_headers/vfio_user_spec.o 00:03:43.681 CXX test/cpp_headers/vhost.o 00:03:43.681 LINK nvmf 00:03:43.940 CXX test/cpp_headers/vmd.o 00:03:43.940 CXX test/cpp_headers/xor.o 00:03:43.940 CXX test/cpp_headers/zipf.o 00:03:45.835 LINK esnap 00:03:46.092 00:03:46.092 real 1m7.204s 00:03:46.092 user 6m13.170s 00:03:46.092 sys 1m7.628s 00:03:46.092 02:48:16 make -- common/autotest_common.sh@1130 -- $ 
xtrace_disable 00:03:46.092 02:48:16 make -- common/autotest_common.sh@10 -- $ set +x 00:03:46.092 ************************************ 00:03:46.092 END TEST make 00:03:46.092 ************************************ 00:03:46.092 02:48:16 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:46.092 02:48:16 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:46.092 02:48:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:46.092 02:48:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.092 02:48:16 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:46.092 02:48:16 -- pm/common@44 -- $ pid=5073 00:03:46.092 02:48:16 -- pm/common@50 -- $ kill -TERM 5073 00:03:46.092 02:48:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.092 02:48:16 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:46.092 02:48:16 -- pm/common@44 -- $ pid=5074 00:03:46.092 02:48:16 -- pm/common@50 -- $ kill -TERM 5074 00:03:46.092 02:48:16 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:46.092 02:48:16 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:46.350 02:48:16 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:46.350 02:48:16 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:46.350 02:48:16 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:46.350 02:48:17 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:46.350 02:48:17 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:46.350 02:48:17 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:46.350 02:48:17 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:46.350 02:48:17 -- scripts/common.sh@336 -- # IFS=.-: 00:03:46.350 02:48:17 -- scripts/common.sh@336 -- # read -ra ver1 00:03:46.350 02:48:17 -- scripts/common.sh@337 -- # IFS=.-: 00:03:46.350 02:48:17 -- scripts/common.sh@337 -- # read -ra ver2 00:03:46.350 02:48:17 -- scripts/common.sh@338 -- # local 'op=<' 00:03:46.350 02:48:17 -- scripts/common.sh@340 -- # ver1_l=2 00:03:46.350 02:48:17 -- scripts/common.sh@341 -- # ver2_l=1 00:03:46.350 02:48:17 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:46.350 02:48:17 -- scripts/common.sh@344 -- # case "$op" in 00:03:46.350 02:48:17 -- scripts/common.sh@345 -- # : 1 00:03:46.350 02:48:17 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:46.350 02:48:17 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:46.350 02:48:17 -- scripts/common.sh@365 -- # decimal 1 00:03:46.350 02:48:17 -- scripts/common.sh@353 -- # local d=1 00:03:46.350 02:48:17 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:46.350 02:48:17 -- scripts/common.sh@355 -- # echo 1 00:03:46.350 02:48:17 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:46.350 02:48:17 -- scripts/common.sh@366 -- # decimal 2 00:03:46.350 02:48:17 -- scripts/common.sh@353 -- # local d=2 00:03:46.350 02:48:17 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:46.350 02:48:17 -- scripts/common.sh@355 -- # echo 2 00:03:46.350 02:48:17 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:46.350 02:48:17 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:46.350 02:48:17 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:46.350 02:48:17 -- scripts/common.sh@368 -- # return 0 00:03:46.350 02:48:17 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:46.350 02:48:17 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:46.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.350 --rc genhtml_branch_coverage=1 00:03:46.350 --rc genhtml_function_coverage=1 00:03:46.350 --rc genhtml_legend=1 00:03:46.350 --rc geninfo_all_blocks=1 00:03:46.350 --rc geninfo_unexecuted_blocks=1 00:03:46.350 00:03:46.350 ' 00:03:46.350 02:48:17 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:46.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.350 --rc genhtml_branch_coverage=1 00:03:46.350 --rc genhtml_function_coverage=1 00:03:46.350 --rc genhtml_legend=1 00:03:46.350 --rc geninfo_all_blocks=1 00:03:46.350 --rc geninfo_unexecuted_blocks=1 00:03:46.350 00:03:46.350 ' 00:03:46.350 02:48:17 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:46.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.350 --rc genhtml_branch_coverage=1 00:03:46.350 --rc genhtml_function_coverage=1 00:03:46.350 --rc genhtml_legend=1 00:03:46.350 --rc geninfo_all_blocks=1 00:03:46.350 --rc geninfo_unexecuted_blocks=1 00:03:46.350 00:03:46.350 ' 00:03:46.350 02:48:17 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:46.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.350 --rc genhtml_branch_coverage=1 00:03:46.350 --rc genhtml_function_coverage=1 00:03:46.350 --rc genhtml_legend=1 00:03:46.350 --rc geninfo_all_blocks=1 00:03:46.350 --rc geninfo_unexecuted_blocks=1 00:03:46.350 00:03:46.350 ' 00:03:46.350 02:48:17 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:46.350 02:48:17 -- nvmf/common.sh@7 -- # uname -s 00:03:46.350 02:48:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:46.350 02:48:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:46.350 02:48:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:46.350 02:48:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:46.350 02:48:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:46.350 02:48:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:46.350 02:48:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:46.350 02:48:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:46.350 02:48:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:46.350 02:48:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:46.350 02:48:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:45146dea-42da-4764-9336-d85a2ddead66 00:03:46.350 
02:48:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=45146dea-42da-4764-9336-d85a2ddead66 00:03:46.350 02:48:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:46.350 02:48:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:46.350 02:48:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:46.350 02:48:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:46.350 02:48:17 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:46.350 02:48:17 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:46.350 02:48:17 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:46.350 02:48:17 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:46.350 02:48:17 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:46.350 02:48:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.350 02:48:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.350 02:48:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.350 02:48:17 -- paths/export.sh@5 -- # export PATH 00:03:46.350 02:48:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.350 02:48:17 -- nvmf/common.sh@51 -- # : 0 00:03:46.350 02:48:17 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:46.350 02:48:17 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:46.350 02:48:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:46.350 02:48:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:46.350 02:48:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:46.350 02:48:17 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:46.350 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:46.350 02:48:17 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:46.350 02:48:17 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:46.350 02:48:17 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:46.350 02:48:17 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:46.350 02:48:17 -- spdk/autotest.sh@32 -- # uname -s 00:03:46.350 02:48:17 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:46.350 02:48:17 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:46.350 02:48:17 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:46.350 02:48:17 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:46.350 02:48:17 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:46.350 02:48:17 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:46.350 02:48:17 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:46.350 02:48:17 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:46.350 02:48:17 -- spdk/autotest.sh@48 -- # udevadm_pid=54262 00:03:46.350 02:48:17 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:46.350 02:48:17 -- pm/common@17 -- # local monitor 00:03:46.350 02:48:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.350 02:48:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.350 02:48:17 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:46.350 02:48:17 -- pm/common@25 -- # sleep 1 00:03:46.350 02:48:17 -- pm/common@21 -- # date +%s 00:03:46.350 02:48:17 -- pm/common@21 -- # date +%s 00:03:46.350 02:48:17 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733366897 00:03:46.350 02:48:17 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733366897 00:03:46.350 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733366897_collect-cpu-load.pm.log 00:03:46.350 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733366897_collect-vmstat.pm.log 00:03:47.298 02:48:18 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:47.298 02:48:18 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:47.298 02:48:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:47.298 02:48:18 -- common/autotest_common.sh@10 -- # set +x 00:03:47.298 02:48:18 -- spdk/autotest.sh@59 -- # create_test_list 00:03:47.298 02:48:18 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:47.298 02:48:18 -- common/autotest_common.sh@10 -- # set +x 00:03:47.561 02:48:18 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:47.561 02:48:18 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:47.561 02:48:18 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:47.561 02:48:18 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:47.561 02:48:18 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:47.561 02:48:18 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:47.561 02:48:18 -- common/autotest_common.sh@1457 -- # uname 00:03:47.561 02:48:18 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:47.561 02:48:18 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:47.561 02:48:18 -- common/autotest_common.sh@1477 -- # uname 00:03:47.561 02:48:18 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:47.561 02:48:18 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:47.561 02:48:18 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:47.561 lcov: LCOV version 1.15 00:03:47.561 02:48:18 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:02.480 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:02.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:17.380 02:48:46 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:17.380 02:48:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:17.380 02:48:46 -- common/autotest_common.sh@10 -- # set +x 00:04:17.380 02:48:46 -- spdk/autotest.sh@78 -- # rm -f 00:04:17.380 02:48:46 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:17.380 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:17.380 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:17.380 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:17.380 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:17.380 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:17.380 02:48:47 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:17.380 02:48:47 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:17.380 02:48:47 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:17.380 02:48:47 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:17.380 02:48:47 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:17.380 02:48:47 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:17.380 02:48:47 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:17.380 02:48:47 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:04:17.380 02:48:47 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:17.380 02:48:47 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:17.380 02:48:47 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:17.380 02:48:47 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:17.380 02:48:47 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:17.380 02:48:47 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:17.380 02:48:47 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:04:17.380 02:48:47 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:17.380 02:48:47 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:04:17.380 02:48:47 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:17.380 02:48:47 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:17.380 02:48:47 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:17.380 02:48:47 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:17.380 02:48:47 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:04:17.380 02:48:47 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:17.380 02:48:47 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:04:17.380 02:48:47 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:04:17.380 02:48:47 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:17.380 02:48:47 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:17.380 02:48:47 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:17.380 02:48:47 -- common/autotest_common.sh@1671 
-- # is_block_zoned nvme2n2 00:04:17.380 02:48:47 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:04:17.380 02:48:47 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:17.380 02:48:47 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:17.380 02:48:47 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:17.380 02:48:47 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:04:17.380 02:48:47 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:04:17.380 02:48:47 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:17.380 02:48:47 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:17.380 02:48:47 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:17.380 02:48:47 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:04:17.380 02:48:47 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:17.380 02:48:47 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:04:17.380 02:48:47 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:04:17.380 02:48:47 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:17.380 02:48:47 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:17.380 02:48:47 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:17.380 02:48:47 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:17.380 02:48:47 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:17.380 02:48:47 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:17.380 02:48:47 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:17.380 02:48:47 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:17.380 No valid GPT data, bailing 00:04:17.380 02:48:47 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:17.380 02:48:47 -- scripts/common.sh@394 -- # pt= 00:04:17.380 02:48:47 -- scripts/common.sh@395 -- # return 1 00:04:17.380 02:48:47 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:17.380 1+0 records in 00:04:17.380 1+0 records out 00:04:17.380 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00569191 s, 184 MB/s 00:04:17.380 02:48:47 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:17.380 02:48:47 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:17.380 02:48:47 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:17.380 02:48:47 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:17.380 02:48:47 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:17.380 No valid GPT data, bailing 00:04:17.380 02:48:47 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:17.380 02:48:47 -- scripts/common.sh@394 -- # pt= 00:04:17.380 02:48:47 -- scripts/common.sh@395 -- # return 1 00:04:17.380 02:48:47 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:17.380 1+0 records in 00:04:17.380 1+0 records out 00:04:17.380 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264877 s, 39.6 MB/s 00:04:17.380 02:48:47 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:17.380 02:48:47 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:17.380 02:48:47 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:17.380 02:48:47 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:17.380 02:48:47 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:17.380 No valid GPT data, bailing 00:04:17.380 02:48:47 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:17.380 02:48:47 -- scripts/common.sh@394 -- # pt= 00:04:17.380 02:48:47 -- scripts/common.sh@395 -- # return 1 00:04:17.380 02:48:47 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:17.380 1+0 records in 00:04:17.380 1+0 records out 00:04:17.380 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00595313 s, 176 MB/s 00:04:17.380 02:48:47 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:17.380 02:48:47 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:17.380 02:48:47 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:17.380 02:48:47 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:17.380 02:48:47 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:17.380 No valid GPT data, bailing 00:04:17.380 02:48:47 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:17.380 02:48:47 -- scripts/common.sh@394 -- # pt= 00:04:17.380 02:48:47 -- scripts/common.sh@395 -- # return 1 00:04:17.380 02:48:47 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:17.380 1+0 records in 00:04:17.380 1+0 records out 00:04:17.380 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0071254 s, 147 MB/s 00:04:17.380 02:48:47 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:17.380 02:48:47 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:17.380 02:48:47 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:17.380 02:48:47 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:17.380 02:48:47 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:17.380 No valid GPT data, bailing 00:04:17.380 02:48:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:17.380 02:48:48 -- scripts/common.sh@394 -- # pt= 00:04:17.380 02:48:48 -- scripts/common.sh@395 -- # return 1 00:04:17.380 02:48:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:17.380 1+0 records in 00:04:17.380 1+0 records out 00:04:17.380 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00643096 s, 163 MB/s 00:04:17.380 02:48:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:17.380 02:48:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:17.380 02:48:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:17.380 02:48:48 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:17.380 02:48:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:17.380 No valid GPT data, bailing 00:04:17.380 02:48:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:17.380 02:48:48 -- scripts/common.sh@394 -- # pt= 00:04:17.380 02:48:48 -- scripts/common.sh@395 -- # return 1 00:04:17.380 02:48:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:17.380 1+0 records in 00:04:17.380 1+0 records out 00:04:17.380 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00592709 s, 177 MB/s 00:04:17.380 02:48:48 -- spdk/autotest.sh@105 -- # sync 00:04:17.380 02:48:48 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:17.380 02:48:48 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:17.380 02:48:48 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:19.292 
02:48:49 -- spdk/autotest.sh@111 -- # uname -s 00:04:19.292 02:48:49 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:19.292 02:48:49 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:19.292 02:48:49 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:19.552 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:20.122 Hugepages 00:04:20.122 node hugesize free / total 00:04:20.122 node0 1048576kB 0 / 0 00:04:20.122 node0 2048kB 0 / 0 00:04:20.122 00:04:20.122 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:20.122 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:20.122 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:20.384 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:20.384 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:20.384 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:20.384 02:48:51 -- spdk/autotest.sh@117 -- # uname -s 00:04:20.384 02:48:51 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:20.384 02:48:51 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:20.384 02:48:51 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:20.958 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:21.528 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:21.528 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:21.528 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:21.528 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:21.528 02:48:52 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:22.470 02:48:53 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:22.470 02:48:53 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:22.470 02:48:53 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:22.470 02:48:53 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:22.470 02:48:53 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:22.470 02:48:53 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:22.470 02:48:53 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:22.470 02:48:53 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:22.470 02:48:53 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:22.732 02:48:53 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:22.732 02:48:53 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:22.732 02:48:53 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:22.994 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:22.994 Waiting for block devices as requested 00:04:23.256 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:23.256 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:23.256 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:23.517 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:28.855 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:28.855 02:48:59 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:28.855 02:48:59 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 
00:04:28.855 02:48:59 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:28.855 02:48:59 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:28.855 02:48:59 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:28.855 02:48:59 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:28.855 02:48:59 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:28.855 02:48:59 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:28.855 02:48:59 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:28.855 02:48:59 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:28.855 02:48:59 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:28.855 02:48:59 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:28.855 02:48:59 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:28.855 02:48:59 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:28.855 02:48:59 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:28.855 02:48:59 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:28.855 02:48:59 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:28.855 02:48:59 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:28.855 02:48:59 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:28.855 02:48:59 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:28.855 02:48:59 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:28.855 02:48:59 -- common/autotest_common.sh@1543 -- # continue 00:04:28.855 02:48:59 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:28.855 02:48:59 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:28.855 02:48:59 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:28.855 02:48:59 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:28.855 02:48:59 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:28.855 02:48:59 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:28.855 02:48:59 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:28.855 02:48:59 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:28.855 02:48:59 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:28.855 02:48:59 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:28.855 02:48:59 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:28.855 02:48:59 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:28.855 02:48:59 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:28.855 02:48:59 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:28.855 02:48:59 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:28.855 02:48:59 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:28.855 02:48:59 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:28.855 02:48:59 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:28.855 02:48:59 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:28.855 02:48:59 -- common/autotest_common.sh@1540 -- # 
unvmcap=' 0' 00:04:28.855 02:48:59 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:28.855 02:48:59 -- common/autotest_common.sh@1543 -- # continue 00:04:28.855 02:48:59 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:28.855 02:48:59 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:28.855 02:48:59 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:28.856 02:48:59 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:28.856 02:48:59 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:28.856 02:48:59 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:28.856 02:48:59 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:28.856 02:48:59 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:28.856 02:48:59 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:28.856 02:48:59 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:28.856 02:48:59 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:28.856 02:48:59 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:28.856 02:48:59 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:28.856 02:48:59 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:28.856 02:48:59 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:28.856 02:48:59 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:28.856 02:48:59 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:28.856 02:48:59 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:28.856 02:48:59 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:28.856 02:48:59 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:28.856 02:48:59 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:28.856 02:48:59 -- common/autotest_common.sh@1543 -- # continue 00:04:28.856 02:48:59 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:28.856 02:48:59 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:28.856 02:48:59 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:28.856 02:48:59 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:28.856 02:48:59 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:28.856 02:48:59 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:28.856 02:48:59 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:28.856 02:48:59 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:28.856 02:48:59 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:28.856 02:48:59 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:28.856 02:48:59 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:28.856 02:48:59 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:28.856 02:48:59 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:28.856 02:48:59 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:28.856 02:48:59 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:28.856 02:48:59 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:28.856 02:48:59 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:28.856 02:48:59 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:28.856 02:48:59 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:28.856 02:48:59 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:28.856 02:48:59 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:28.856 02:48:59 -- common/autotest_common.sh@1543 -- # continue 00:04:28.856 02:48:59 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:28.856 02:48:59 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:28.856 02:48:59 -- common/autotest_common.sh@10 -- # set +x 00:04:28.856 02:48:59 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:28.856 02:48:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:28.856 02:48:59 -- common/autotest_common.sh@10 -- # set +x 00:04:28.856 02:48:59 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:29.117 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:29.690 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:29.690 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:29.690 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:29.690 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:29.953 02:49:00 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:29.953 02:49:00 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:29.953 02:49:00 -- common/autotest_common.sh@10 -- # set +x 00:04:29.953 02:49:00 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:29.953 02:49:00 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:29.953 02:49:00 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:29.953 02:49:00 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:29.953 02:49:00 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:29.953 02:49:00 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:29.953 02:49:00 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:29.953 02:49:00 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:29.953 02:49:00 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:29.953 02:49:00 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:29.953 02:49:00 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:29.953 02:49:00 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:29.953 02:49:00 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:29.953 02:49:00 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:29.953 02:49:00 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:29.953 02:49:00 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:29.953 02:49:00 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:29.953 02:49:00 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:29.953 02:49:00 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:29.953 02:49:00 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:29.953 02:49:00 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:29.953 02:49:00 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:29.953 
02:49:00 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:29.953 02:49:00 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:29.953 02:49:00 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:29.953 02:49:00 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:29.953 02:49:00 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:29.953 02:49:00 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:29.953 02:49:00 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:29.953 02:49:00 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:29.953 02:49:00 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:29.953 02:49:00 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:29.953 02:49:00 -- common/autotest_common.sh@1572 -- # return 0 00:04:29.953 02:49:00 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:29.953 02:49:00 -- common/autotest_common.sh@1580 -- # return 0 00:04:29.953 02:49:00 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:29.953 02:49:00 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:29.953 02:49:00 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:29.953 02:49:00 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:29.953 02:49:00 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:29.953 02:49:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:29.953 02:49:00 -- common/autotest_common.sh@10 -- # set +x 00:04:29.953 02:49:00 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:29.953 02:49:00 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:29.953 02:49:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.953 02:49:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.953 02:49:00 -- common/autotest_common.sh@10 -- # set +x 00:04:29.953 ************************************ 00:04:29.953 START TEST env 00:04:29.953 ************************************ 00:04:29.953 02:49:00 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:30.215 * Looking for test storage... 
00:04:30.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:30.215 02:49:00 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:30.215 02:49:00 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:30.215 02:49:00 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:30.215 02:49:00 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:30.215 02:49:00 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.215 02:49:00 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.215 02:49:00 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.215 02:49:00 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.215 02:49:00 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.215 02:49:00 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.215 02:49:00 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.215 02:49:00 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.215 02:49:00 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.215 02:49:00 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.215 02:49:00 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.215 02:49:00 env -- scripts/common.sh@344 -- # case "$op" in 00:04:30.215 02:49:00 env -- scripts/common.sh@345 -- # : 1 00:04:30.215 02:49:00 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.215 02:49:00 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:30.215 02:49:00 env -- scripts/common.sh@365 -- # decimal 1 00:04:30.215 02:49:00 env -- scripts/common.sh@353 -- # local d=1 00:04:30.215 02:49:00 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.215 02:49:00 env -- scripts/common.sh@355 -- # echo 1 00:04:30.215 02:49:00 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.215 02:49:00 env -- scripts/common.sh@366 -- # decimal 2 00:04:30.215 02:49:00 env -- scripts/common.sh@353 -- # local d=2 00:04:30.215 02:49:00 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.215 02:49:00 env -- scripts/common.sh@355 -- # echo 2 00:04:30.215 02:49:00 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.215 02:49:00 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.215 02:49:00 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.215 02:49:00 env -- scripts/common.sh@368 -- # return 0 00:04:30.215 02:49:00 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.215 02:49:00 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:30.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.215 --rc genhtml_branch_coverage=1 00:04:30.215 --rc genhtml_function_coverage=1 00:04:30.215 --rc genhtml_legend=1 00:04:30.215 --rc geninfo_all_blocks=1 00:04:30.215 --rc geninfo_unexecuted_blocks=1 00:04:30.215 00:04:30.215 ' 00:04:30.215 02:49:00 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:30.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.215 --rc genhtml_branch_coverage=1 00:04:30.215 --rc genhtml_function_coverage=1 00:04:30.215 --rc genhtml_legend=1 00:04:30.215 --rc geninfo_all_blocks=1 00:04:30.215 --rc geninfo_unexecuted_blocks=1 00:04:30.215 00:04:30.215 ' 00:04:30.215 02:49:00 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:30.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.215 --rc genhtml_branch_coverage=1 00:04:30.215 --rc genhtml_function_coverage=1 00:04:30.215 --rc 
genhtml_legend=1 00:04:30.215 --rc geninfo_all_blocks=1 00:04:30.215 --rc geninfo_unexecuted_blocks=1 00:04:30.215 00:04:30.215 ' 00:04:30.215 02:49:00 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:30.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.215 --rc genhtml_branch_coverage=1 00:04:30.215 --rc genhtml_function_coverage=1 00:04:30.215 --rc genhtml_legend=1 00:04:30.215 --rc geninfo_all_blocks=1 00:04:30.215 --rc geninfo_unexecuted_blocks=1 00:04:30.215 00:04:30.215 ' 00:04:30.215 02:49:00 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:30.215 02:49:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.215 02:49:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.215 02:49:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.215 ************************************ 00:04:30.215 START TEST env_memory 00:04:30.215 ************************************ 00:04:30.215 02:49:00 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:30.215 00:04:30.215 00:04:30.215 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.215 http://cunit.sourceforge.net/ 00:04:30.215 00:04:30.215 00:04:30.215 Suite: memory 00:04:30.215 Test: alloc and free memory map ...[2024-12-05 02:49:01.005394] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:30.215 passed 00:04:30.215 Test: mem map translation ...[2024-12-05 02:49:01.044324] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:30.215 [2024-12-05 02:49:01.044384] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:30.215 [2024-12-05 02:49:01.044461] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:30.215 [2024-12-05 02:49:01.044478] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:30.477 passed 00:04:30.477 Test: mem map registration ...[2024-12-05 02:49:01.112607] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:30.477 [2024-12-05 02:49:01.112654] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:30.478 passed 00:04:30.478 Test: mem map adjacent registrations ...passed 00:04:30.478 00:04:30.478 Run Summary: Type Total Ran Passed Failed Inactive 00:04:30.478 suites 1 1 n/a 0 0 00:04:30.478 tests 4 4 4 0 0 00:04:30.478 asserts 152 152 152 0 n/a 00:04:30.478 00:04:30.478 Elapsed time = 0.233 seconds 00:04:30.478 00:04:30.478 real 0m0.267s 00:04:30.478 user 0m0.240s 00:04:30.478 sys 0m0.022s 00:04:30.478 02:49:01 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.478 ************************************ 00:04:30.478 END TEST env_memory 00:04:30.478 ************************************ 00:04:30.478 02:49:01 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:30.478 02:49:01 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:30.478 02:49:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.478 02:49:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.478 02:49:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.478 ************************************ 00:04:30.478 START TEST env_vtophys 00:04:30.478 ************************************ 00:04:30.478 02:49:01 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:30.478 EAL: lib.eal log level changed from notice to debug 00:04:30.478 EAL: Detected lcore 0 as core 0 on socket 0 00:04:30.478 EAL: Detected lcore 1 as core 0 on socket 0 00:04:30.478 EAL: Detected lcore 2 as core 0 on socket 0 00:04:30.478 EAL: Detected lcore 3 as core 0 on socket 0 00:04:30.478 EAL: Detected lcore 4 as core 0 on socket 0 00:04:30.478 EAL: Detected lcore 5 as core 0 on socket 0 00:04:30.478 EAL: Detected lcore 6 as core 0 on socket 0 00:04:30.478 EAL: Detected lcore 7 as core 0 on socket 0 00:04:30.478 EAL: Detected lcore 8 as core 0 on socket 0 00:04:30.478 EAL: Detected lcore 9 as core 0 on socket 0 00:04:30.478 EAL: Maximum logical cores by configuration: 128 00:04:30.478 EAL: Detected CPU lcores: 10 00:04:30.478 EAL: Detected NUMA nodes: 1 00:04:30.478 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:30.478 EAL: Detected shared linkage of DPDK 00:04:30.478 EAL: No shared files mode enabled, IPC will be disabled 00:04:30.478 EAL: Selected IOVA mode 'PA' 00:04:30.478 EAL: Probing VFIO support... 00:04:30.478 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:30.478 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:30.478 EAL: Ask a virtual area of 0x2e000 bytes 00:04:30.478 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:30.478 EAL: Setting up physically contiguous memory... 
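
The EAL lines above (lcore detection, VFIO probing, contiguous-memory setup) are printed while the vtophys test binary brings up the SPDK environment layer. A minimal sketch of that bring-up in C, assuming a stock SPDK install; the app name and core mask used here are illustrative, not what the test actually passes:

    #include <stdio.h>
    #include "spdk/env.h"

    int
    main(void)
    {
            struct spdk_env_opts opts;

            spdk_env_opts_init(&opts);
            opts.name = "env_init_sketch";   /* hypothetical app name */
            opts.core_mask = "0x1";

            /* spdk_env_init() drives the DPDK EAL bring-up that emits the
             * "EAL: Detected lcore ...", "EAL: Probing VFIO support..." and
             * "EAL: Setting up physically contiguous memory..." messages. */
            if (spdk_env_init(&opts) < 0) {
                    fprintf(stderr, "failed to initialize SPDK environment\n");
                    return 1;
            }

            spdk_env_fini();
            return 0;
    }
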
00:04:30.478 EAL: Setting maximum number of open files to 524288 00:04:30.478 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:30.478 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:30.478 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.478 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:30.478 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.478 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.478 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:30.478 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:30.478 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.478 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:30.478 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.478 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.478 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:30.478 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:30.478 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.478 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:30.478 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.478 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.478 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:30.478 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:30.478 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.478 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:30.478 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.478 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.478 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:30.478 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:30.478 EAL: Hugepages will be freed exactly as allocated. 00:04:30.478 EAL: No shared files mode enabled, IPC is disabled 00:04:30.478 EAL: No shared files mode enabled, IPC is disabled 00:04:30.739 EAL: TSC frequency is ~2600000 KHz 00:04:30.739 EAL: Main lcore 0 is ready (tid=7f618f6e9a40;cpuset=[0]) 00:04:30.739 EAL: Trying to obtain current memory policy. 00:04:30.739 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.739 EAL: Restoring previous memory policy: 0 00:04:30.739 EAL: request: mp_malloc_sync 00:04:30.739 EAL: No shared files mode enabled, IPC is disabled 00:04:30.739 EAL: Heap on socket 0 was expanded by 2MB 00:04:30.739 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:30.739 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:30.739 EAL: Mem event callback 'spdk:(nil)' registered 00:04:30.739 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:30.739 00:04:30.739 00:04:30.739 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.739 http://cunit.sourceforge.net/ 00:04:30.739 00:04:30.739 00:04:30.739 Suite: components_suite 00:04:31.001 Test: vtophys_malloc_test ...passed 00:04:31.001 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
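
The "Mem event callback 'spdk:(nil)' registered" message is where the env layer hooks DPDK heap growth into SPDK's own memory maps, and the env_memory run further up pokes the same machinery through spdk_mem_map_alloc(). A rough sketch of creating such a map with a notify callback, assuming the SPDK env is already initialized; names prefixed sketch_ are placeholders:

    #include <stddef.h>
    #include "spdk/env.h"

    /* Invoked once for every region registered with, or removed from, the map. */
    static int
    sketch_notify(void *cb_ctx, struct spdk_mem_map *map,
                  enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
    {
            /* A real consumer would install its translation here (e.g. an IOMMU mapping). */
            return 0;
    }

    static const struct spdk_mem_map_ops sketch_ops = {
            .notify_cb = sketch_notify,
            .are_contiguous = NULL,
    };

    static struct spdk_mem_map *
    sketch_create_map(void)
    {
            /* Default translation returned for addresses nobody has registered. */
            return spdk_mem_map_alloc(SPDK_VTOPHYS_ERROR, &sketch_ops, NULL);
    }
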
00:04:31.001 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.001 EAL: Restoring previous memory policy: 4 00:04:31.001 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.001 EAL: request: mp_malloc_sync 00:04:31.001 EAL: No shared files mode enabled, IPC is disabled 00:04:31.001 EAL: Heap on socket 0 was expanded by 4MB 00:04:31.001 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.001 EAL: request: mp_malloc_sync 00:04:31.001 EAL: No shared files mode enabled, IPC is disabled 00:04:31.001 EAL: Heap on socket 0 was shrunk by 4MB 00:04:31.001 EAL: Trying to obtain current memory policy. 00:04:31.001 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.001 EAL: Restoring previous memory policy: 4 00:04:31.001 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.001 EAL: request: mp_malloc_sync 00:04:31.001 EAL: No shared files mode enabled, IPC is disabled 00:04:31.001 EAL: Heap on socket 0 was expanded by 6MB 00:04:31.001 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.001 EAL: request: mp_malloc_sync 00:04:31.001 EAL: No shared files mode enabled, IPC is disabled 00:04:31.001 EAL: Heap on socket 0 was shrunk by 6MB 00:04:31.001 EAL: Trying to obtain current memory policy. 00:04:31.001 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.001 EAL: Restoring previous memory policy: 4 00:04:31.001 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.001 EAL: request: mp_malloc_sync 00:04:31.001 EAL: No shared files mode enabled, IPC is disabled 00:04:31.001 EAL: Heap on socket 0 was expanded by 10MB 00:04:31.001 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.001 EAL: request: mp_malloc_sync 00:04:31.001 EAL: No shared files mode enabled, IPC is disabled 00:04:31.001 EAL: Heap on socket 0 was shrunk by 10MB 00:04:31.001 EAL: Trying to obtain current memory policy. 00:04:31.001 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.001 EAL: Restoring previous memory policy: 4 00:04:31.001 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.001 EAL: request: mp_malloc_sync 00:04:31.001 EAL: No shared files mode enabled, IPC is disabled 00:04:31.001 EAL: Heap on socket 0 was expanded by 18MB 00:04:31.001 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.001 EAL: request: mp_malloc_sync 00:04:31.001 EAL: No shared files mode enabled, IPC is disabled 00:04:31.001 EAL: Heap on socket 0 was shrunk by 18MB 00:04:31.001 EAL: Trying to obtain current memory policy. 00:04:31.001 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.001 EAL: Restoring previous memory policy: 4 00:04:31.001 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.001 EAL: request: mp_malloc_sync 00:04:31.001 EAL: No shared files mode enabled, IPC is disabled 00:04:31.001 EAL: Heap on socket 0 was expanded by 34MB 00:04:31.263 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.263 EAL: request: mp_malloc_sync 00:04:31.263 EAL: No shared files mode enabled, IPC is disabled 00:04:31.263 EAL: Heap on socket 0 was shrunk by 34MB 00:04:31.263 EAL: Trying to obtain current memory policy. 
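
Each "expanded by N MB"/"shrunk by N MB" pair above corresponds to the test allocating and then releasing a progressively larger DMA-safe buffer. A hedged sketch of one such cycle; spdk_dma_zmalloc() and the 2 MB size/alignment are illustrative rather than the exact calls and values the test uses:

    #include <stdio.h>
    #include <inttypes.h>
    #include "spdk/env.h"

    static int
    sketch_check_buffer(size_t size)
    {
            void *buf;
            uint64_t paddr;

            buf = spdk_dma_zmalloc(size, 0x200000 /* 2 MB alignment */, NULL);
            if (buf == NULL) {
                    return -1;
            }

            /* Resolved through the same memory map that the heap-expansion
             * callbacks keep up to date. */
            paddr = spdk_vtophys(buf, NULL);
            printf("vaddr %p -> paddr 0x%" PRIx64 "\n", buf, paddr);

            spdk_dma_free(buf);
            return paddr == SPDK_VTOPHYS_ERROR ? -1 : 0;
    }
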
00:04:31.263 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.263 EAL: Restoring previous memory policy: 4 00:04:31.263 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.263 EAL: request: mp_malloc_sync 00:04:31.263 EAL: No shared files mode enabled, IPC is disabled 00:04:31.263 EAL: Heap on socket 0 was expanded by 66MB 00:04:31.263 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.263 EAL: request: mp_malloc_sync 00:04:31.263 EAL: No shared files mode enabled, IPC is disabled 00:04:31.263 EAL: Heap on socket 0 was shrunk by 66MB 00:04:31.263 EAL: Trying to obtain current memory policy. 00:04:31.263 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.263 EAL: Restoring previous memory policy: 4 00:04:31.263 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.263 EAL: request: mp_malloc_sync 00:04:31.263 EAL: No shared files mode enabled, IPC is disabled 00:04:31.263 EAL: Heap on socket 0 was expanded by 130MB 00:04:31.524 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.524 EAL: request: mp_malloc_sync 00:04:31.524 EAL: No shared files mode enabled, IPC is disabled 00:04:31.524 EAL: Heap on socket 0 was shrunk by 130MB 00:04:31.784 EAL: Trying to obtain current memory policy. 00:04:31.784 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.784 EAL: Restoring previous memory policy: 4 00:04:31.784 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.784 EAL: request: mp_malloc_sync 00:04:31.784 EAL: No shared files mode enabled, IPC is disabled 00:04:31.784 EAL: Heap on socket 0 was expanded by 258MB 00:04:32.045 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.045 EAL: request: mp_malloc_sync 00:04:32.045 EAL: No shared files mode enabled, IPC is disabled 00:04:32.045 EAL: Heap on socket 0 was shrunk by 258MB 00:04:32.306 EAL: Trying to obtain current memory policy. 00:04:32.306 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.306 EAL: Restoring previous memory policy: 4 00:04:32.306 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.306 EAL: request: mp_malloc_sync 00:04:32.306 EAL: No shared files mode enabled, IPC is disabled 00:04:32.306 EAL: Heap on socket 0 was expanded by 514MB 00:04:32.878 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.878 EAL: request: mp_malloc_sync 00:04:32.878 EAL: No shared files mode enabled, IPC is disabled 00:04:32.878 EAL: Heap on socket 0 was shrunk by 514MB 00:04:33.450 EAL: Trying to obtain current memory policy. 
00:04:33.450 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.711 EAL: Restoring previous memory policy: 4 00:04:33.711 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.711 EAL: request: mp_malloc_sync 00:04:33.711 EAL: No shared files mode enabled, IPC is disabled 00:04:33.711 EAL: Heap on socket 0 was expanded by 1026MB 00:04:34.652 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.652 EAL: request: mp_malloc_sync 00:04:34.652 EAL: No shared files mode enabled, IPC is disabled 00:04:34.652 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:35.593 passed 00:04:35.593 00:04:35.593 Run Summary: Type Total Ran Passed Failed Inactive 00:04:35.593 suites 1 1 n/a 0 0 00:04:35.593 tests 2 2 2 0 0 00:04:35.593 asserts 5726 5726 5726 0 n/a 00:04:35.593 00:04:35.593 Elapsed time = 4.807 seconds 00:04:35.593 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.593 EAL: request: mp_malloc_sync 00:04:35.593 EAL: No shared files mode enabled, IPC is disabled 00:04:35.593 EAL: Heap on socket 0 was shrunk by 2MB 00:04:35.593 EAL: No shared files mode enabled, IPC is disabled 00:04:35.593 EAL: No shared files mode enabled, IPC is disabled 00:04:35.593 EAL: No shared files mode enabled, IPC is disabled 00:04:35.593 00:04:35.593 real 0m5.069s 00:04:35.593 user 0m4.272s 00:04:35.593 sys 0m0.651s 00:04:35.593 02:49:06 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.593 02:49:06 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:35.593 ************************************ 00:04:35.593 END TEST env_vtophys 00:04:35.593 ************************************ 00:04:35.593 02:49:06 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:35.593 02:49:06 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.593 02:49:06 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.593 02:49:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:35.593 ************************************ 00:04:35.593 START TEST env_pci 00:04:35.593 ************************************ 00:04:35.593 02:49:06 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:35.593 00:04:35.593 00:04:35.593 CUnit - A unit testing framework for C - Version 2.1-3 00:04:35.593 http://cunit.sourceforge.net/ 00:04:35.593 00:04:35.593 00:04:35.593 Suite: pci 00:04:35.593 Test: pci_hook ...[2024-12-05 02:49:06.395246] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57013 has claimed it 00:04:35.593 passed 00:04:35.593 00:04:35.593 Run Summary: Type Total Ran Passed Failed Inactive 00:04:35.593 suites 1 1 n/a 0 0 00:04:35.593 tests 1 1 1 0 0 00:04:35.593 asserts 25 25 25 0 n/a 00:04:35.593 00:04:35.593 Elapsed time = 0.005 seconds 00:04:35.593 EAL: Cannot find device (10000:00:01.0) 00:04:35.593 EAL: Failed to attach device on primary process 00:04:35.593 00:04:35.593 real 0m0.065s 00:04:35.593 user 0m0.029s 00:04:35.593 sys 0m0.035s 00:04:35.593 02:49:06 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.853 02:49:06 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:35.853 ************************************ 00:04:35.853 END TEST env_pci 00:04:35.853 ************************************ 00:04:35.853 02:49:06 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:35.853 02:49:06 env -- env/env.sh@15 -- # uname 00:04:35.853 02:49:06 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:35.853 02:49:06 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:35.853 02:49:06 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:35.853 02:49:06 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:35.853 02:49:06 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.853 02:49:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:35.853 ************************************ 00:04:35.853 START TEST env_dpdk_post_init 00:04:35.853 ************************************ 00:04:35.853 02:49:06 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:35.853 EAL: Detected CPU lcores: 10 00:04:35.853 EAL: Detected NUMA nodes: 1 00:04:35.853 EAL: Detected shared linkage of DPDK 00:04:35.853 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:35.853 EAL: Selected IOVA mode 'PA' 00:04:35.853 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:35.853 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:35.853 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:35.853 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:35.853 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:36.114 Starting DPDK initialization... 00:04:36.114 Starting SPDK post initialization... 00:04:36.114 SPDK NVMe probe 00:04:36.114 Attaching to 0000:00:10.0 00:04:36.114 Attaching to 0000:00:11.0 00:04:36.114 Attaching to 0000:00:12.0 00:04:36.114 Attaching to 0000:00:13.0 00:04:36.114 Attached to 0000:00:10.0 00:04:36.114 Attached to 0000:00:11.0 00:04:36.114 Attached to 0000:00:13.0 00:04:36.114 Attached to 0000:00:12.0 00:04:36.114 Cleaning up... 
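
The "Attaching to"/"Attached to" pairs above are printed while env_dpdk_post_init enumerates the four NVMe controllers at 0000:00:10.0 through 0000:00:13.0. A minimal sketch of the probe/attach callback pattern that produces output like this, assuming spdk_env_init() has already run; function names are placeholders:

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    static bool
    sketch_probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
                    struct spdk_nvme_ctrlr_opts *opts)
    {
            printf("Attaching to %s\n", trid->traddr);
            return true;    /* accept every controller the scan finds */
    }

    static void
    sketch_attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
                     struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
    {
            printf("Attached to %s\n", trid->traddr);
    }

    /* One call walks the local PCI bus and fires the callbacks once per NVMe device. */
    static int
    sketch_scan(void)
    {
            return spdk_nvme_probe(NULL, NULL, sketch_probe_cb, sketch_attach_cb, NULL);
    }
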
00:04:36.114 00:04:36.114 real 0m0.230s 00:04:36.114 user 0m0.071s 00:04:36.114 sys 0m0.062s 00:04:36.114 02:49:06 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.114 02:49:06 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:36.114 ************************************ 00:04:36.114 END TEST env_dpdk_post_init 00:04:36.114 ************************************ 00:04:36.114 02:49:06 env -- env/env.sh@26 -- # uname 00:04:36.114 02:49:06 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:36.114 02:49:06 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:36.114 02:49:06 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.114 02:49:06 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.114 02:49:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.114 ************************************ 00:04:36.114 START TEST env_mem_callbacks 00:04:36.114 ************************************ 00:04:36.114 02:49:06 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:36.114 EAL: Detected CPU lcores: 10 00:04:36.114 EAL: Detected NUMA nodes: 1 00:04:36.114 EAL: Detected shared linkage of DPDK 00:04:36.114 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:36.114 EAL: Selected IOVA mode 'PA' 00:04:36.114 00:04:36.114 00:04:36.114 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.114 http://cunit.sourceforge.net/ 00:04:36.114 00:04:36.114 00:04:36.114 Suite: memory 00:04:36.114 Test: test ... 00:04:36.114 register 0x200000200000 2097152 00:04:36.114 malloc 3145728 00:04:36.114 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:36.114 register 0x200000400000 4194304 00:04:36.114 buf 0x2000004fffc0 len 3145728 PASSED 00:04:36.114 malloc 64 00:04:36.114 buf 0x2000004ffec0 len 64 PASSED 00:04:36.114 malloc 4194304 00:04:36.114 register 0x200000800000 6291456 00:04:36.114 buf 0x2000009fffc0 len 4194304 PASSED 00:04:36.114 free 0x2000004fffc0 3145728 00:04:36.114 free 0x2000004ffec0 64 00:04:36.114 unregister 0x200000400000 4194304 PASSED 00:04:36.114 free 0x2000009fffc0 4194304 00:04:36.114 unregister 0x200000800000 6291456 PASSED 00:04:36.114 malloc 8388608 00:04:36.114 register 0x200000400000 10485760 00:04:36.114 buf 0x2000005fffc0 len 8388608 PASSED 00:04:36.114 free 0x2000005fffc0 8388608 00:04:36.114 unregister 0x200000400000 10485760 PASSED 00:04:36.114 passed 00:04:36.114 00:04:36.114 Run Summary: Type Total Ran Passed Failed Inactive 00:04:36.114 suites 1 1 n/a 0 0 00:04:36.114 tests 1 1 1 0 0 00:04:36.114 asserts 15 15 15 0 n/a 00:04:36.114 00:04:36.114 Elapsed time = 0.047 seconds 00:04:36.376 00:04:36.376 real 0m0.217s 00:04:36.376 user 0m0.066s 00:04:36.376 sys 0m0.049s 00:04:36.376 02:49:06 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.376 ************************************ 00:04:36.376 END TEST env_mem_callbacks 00:04:36.376 ************************************ 00:04:36.376 02:49:06 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:36.376 00:04:36.376 real 0m6.223s 00:04:36.376 user 0m4.827s 00:04:36.376 sys 0m1.038s 00:04:36.376 02:49:07 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.376 02:49:07 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.376 ************************************ 00:04:36.376 END TEST env 00:04:36.376 
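
The register/unregister lines in the mem_callbacks run above show address ranges being added to and dropped from SPDK's memory maps around ordinary allocations; the test's own logging comes from the callbacks it installs, not from the env API itself. A rough sketch of the underlying calls, with vaddr/len supplied by the caller:

    #include <stdio.h>
    #include "spdk/env.h"

    /* Expose an externally allocated, 2 MB-aligned region to SPDK's memory
     * maps, then remove it again. */
    static int
    sketch_register_region(void *vaddr, size_t len)
    {
            int rc;

            rc = spdk_mem_register(vaddr, len);
            if (rc != 0) {
                    fprintf(stderr, "spdk_mem_register failed: %d\n", rc);
                    return rc;
            }

            /* ... the region is now usable for DMA and resolvable via spdk_vtophys() ... */

            return spdk_mem_unregister(vaddr, len);
    }
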
************************************ 00:04:36.376 02:49:07 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:36.376 02:49:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.376 02:49:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.376 02:49:07 -- common/autotest_common.sh@10 -- # set +x 00:04:36.376 ************************************ 00:04:36.376 START TEST rpc 00:04:36.376 ************************************ 00:04:36.376 02:49:07 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:36.376 * Looking for test storage... 00:04:36.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:36.376 02:49:07 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:36.376 02:49:07 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:36.376 02:49:07 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:36.376 02:49:07 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:36.376 02:49:07 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.376 02:49:07 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.376 02:49:07 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.376 02:49:07 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.376 02:49:07 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.376 02:49:07 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.376 02:49:07 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.376 02:49:07 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.376 02:49:07 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.376 02:49:07 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.376 02:49:07 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.376 02:49:07 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:36.376 02:49:07 rpc -- scripts/common.sh@345 -- # : 1 00:04:36.376 02:49:07 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.376 02:49:07 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.376 02:49:07 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:36.376 02:49:07 rpc -- scripts/common.sh@353 -- # local d=1 00:04:36.376 02:49:07 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.376 02:49:07 rpc -- scripts/common.sh@355 -- # echo 1 00:04:36.376 02:49:07 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.376 02:49:07 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:36.376 02:49:07 rpc -- scripts/common.sh@353 -- # local d=2 00:04:36.376 02:49:07 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.376 02:49:07 rpc -- scripts/common.sh@355 -- # echo 2 00:04:36.376 02:49:07 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.376 02:49:07 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.376 02:49:07 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.376 02:49:07 rpc -- scripts/common.sh@368 -- # return 0 00:04:36.376 02:49:07 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.376 02:49:07 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:36.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.376 --rc genhtml_branch_coverage=1 00:04:36.376 --rc genhtml_function_coverage=1 00:04:36.376 --rc genhtml_legend=1 00:04:36.376 --rc geninfo_all_blocks=1 00:04:36.376 --rc geninfo_unexecuted_blocks=1 00:04:36.376 00:04:36.376 ' 00:04:36.376 02:49:07 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:36.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.376 --rc genhtml_branch_coverage=1 00:04:36.376 --rc genhtml_function_coverage=1 00:04:36.376 --rc genhtml_legend=1 00:04:36.376 --rc geninfo_all_blocks=1 00:04:36.376 --rc geninfo_unexecuted_blocks=1 00:04:36.376 00:04:36.376 ' 00:04:36.376 02:49:07 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:36.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.376 --rc genhtml_branch_coverage=1 00:04:36.376 --rc genhtml_function_coverage=1 00:04:36.376 --rc genhtml_legend=1 00:04:36.376 --rc geninfo_all_blocks=1 00:04:36.376 --rc geninfo_unexecuted_blocks=1 00:04:36.376 00:04:36.376 ' 00:04:36.376 02:49:07 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:36.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.376 --rc genhtml_branch_coverage=1 00:04:36.376 --rc genhtml_function_coverage=1 00:04:36.376 --rc genhtml_legend=1 00:04:36.376 --rc geninfo_all_blocks=1 00:04:36.376 --rc geninfo_unexecuted_blocks=1 00:04:36.376 00:04:36.376 ' 00:04:36.376 02:49:07 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57134 00:04:36.376 02:49:07 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.376 02:49:07 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57134 00:04:36.376 02:49:07 rpc -- common/autotest_common.sh@835 -- # '[' -z 57134 ']' 00:04:36.376 02:49:07 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.376 02:49:07 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.376 02:49:07 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:36.376 02:49:07 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.376 02:49:07 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:36.376 02:49:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.638 [2024-12-05 02:49:07.257707] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:04:36.638 [2024-12-05 02:49:07.257825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57134 ] 00:04:36.638 [2024-12-05 02:49:07.415120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.900 [2024-12-05 02:49:07.511781] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:36.900 [2024-12-05 02:49:07.511833] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57134' to capture a snapshot of events at runtime. 00:04:36.900 [2024-12-05 02:49:07.511844] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:36.900 [2024-12-05 02:49:07.511853] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:36.900 [2024-12-05 02:49:07.511860] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57134 for offline analysis/debug. 00:04:36.900 [2024-12-05 02:49:07.512743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.474 02:49:08 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.474 02:49:08 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:37.474 02:49:08 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:37.474 02:49:08 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:37.474 02:49:08 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:37.475 02:49:08 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:37.475 02:49:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.475 02:49:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.475 02:49:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.475 ************************************ 00:04:37.475 START TEST rpc_integrity 00:04:37.475 ************************************ 00:04:37.475 02:49:08 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:37.475 02:49:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:37.475 02:49:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.475 02:49:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.475 02:49:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.475 02:49:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:37.475 02:49:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:37.475 02:49:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:37.475 02:49:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
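
rpc_cmd above talks to the freshly started spdk_tgt over the /var/tmp/spdk.sock JSON-RPC socket; methods such as bdev_get_bdevs and bdev_malloc_create are handlers registered inside the target. A hedged sketch of how such a handler is wired up; "example_ping" is a made-up method name, not one this test calls:

    #include "spdk/rpc.h"
    #include "spdk/jsonrpc.h"

    static void
    rpc_example_ping(struct spdk_jsonrpc_request *request, const struct spdk_json_val *params)
    {
            if (params != NULL) {
                    spdk_jsonrpc_send_error_response(request, SPDK_JSONRPC_ERROR_INVALID_PARAMS,
                                                     "example_ping takes no parameters");
                    return;
            }

            /* Real handlers parse their parameters and reply with a JSON payload;
             * a bare boolean response is enough for this sketch. */
            spdk_jsonrpc_send_bool_response(request, true);
    }
    SPDK_RPC_REGISTER("example_ping", rpc_example_ping, SPDK_RPC_RUNTIME)
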
00:04:37.475 02:49:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.475 02:49:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.475 02:49:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.475 02:49:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:37.475 02:49:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:37.475 02:49:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.475 02:49:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.475 02:49:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.475 02:49:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:37.475 { 00:04:37.475 "name": "Malloc0", 00:04:37.475 "aliases": [ 00:04:37.475 "6d1c4d96-8b09-4e6c-8ee1-62222492ee61" 00:04:37.475 ], 00:04:37.475 "product_name": "Malloc disk", 00:04:37.475 "block_size": 512, 00:04:37.475 "num_blocks": 16384, 00:04:37.475 "uuid": "6d1c4d96-8b09-4e6c-8ee1-62222492ee61", 00:04:37.475 "assigned_rate_limits": { 00:04:37.475 "rw_ios_per_sec": 0, 00:04:37.475 "rw_mbytes_per_sec": 0, 00:04:37.475 "r_mbytes_per_sec": 0, 00:04:37.475 "w_mbytes_per_sec": 0 00:04:37.475 }, 00:04:37.475 "claimed": false, 00:04:37.475 "zoned": false, 00:04:37.475 "supported_io_types": { 00:04:37.475 "read": true, 00:04:37.475 "write": true, 00:04:37.475 "unmap": true, 00:04:37.475 "flush": true, 00:04:37.475 "reset": true, 00:04:37.475 "nvme_admin": false, 00:04:37.475 "nvme_io": false, 00:04:37.475 "nvme_io_md": false, 00:04:37.475 "write_zeroes": true, 00:04:37.475 "zcopy": true, 00:04:37.475 "get_zone_info": false, 00:04:37.475 "zone_management": false, 00:04:37.475 "zone_append": false, 00:04:37.475 "compare": false, 00:04:37.475 "compare_and_write": false, 00:04:37.475 "abort": true, 00:04:37.475 "seek_hole": false, 00:04:37.475 "seek_data": false, 00:04:37.475 "copy": true, 00:04:37.475 "nvme_iov_md": false 00:04:37.475 }, 00:04:37.475 "memory_domains": [ 00:04:37.475 { 00:04:37.475 "dma_device_id": "system", 00:04:37.475 "dma_device_type": 1 00:04:37.475 }, 00:04:37.475 { 00:04:37.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.475 "dma_device_type": 2 00:04:37.475 } 00:04:37.475 ], 00:04:37.475 "driver_specific": {} 00:04:37.475 } 00:04:37.475 ]' 00:04:37.475 02:49:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:37.475 02:49:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:37.475 02:49:08 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:37.475 02:49:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.475 02:49:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.475 [2024-12-05 02:49:08.231294] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:37.475 [2024-12-05 02:49:08.231360] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:37.475 [2024-12-05 02:49:08.231390] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:37.475 [2024-12-05 02:49:08.231404] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:37.475 [2024-12-05 02:49:08.233718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:37.475 [2024-12-05 02:49:08.233768] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:37.475 
Passthru0 00:04:37.475 02:49:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.475 02:49:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:37.475 02:49:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.475 02:49:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.475 02:49:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.475 02:49:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:37.475 { 00:04:37.475 "name": "Malloc0", 00:04:37.475 "aliases": [ 00:04:37.475 "6d1c4d96-8b09-4e6c-8ee1-62222492ee61" 00:04:37.475 ], 00:04:37.475 "product_name": "Malloc disk", 00:04:37.475 "block_size": 512, 00:04:37.475 "num_blocks": 16384, 00:04:37.475 "uuid": "6d1c4d96-8b09-4e6c-8ee1-62222492ee61", 00:04:37.475 "assigned_rate_limits": { 00:04:37.475 "rw_ios_per_sec": 0, 00:04:37.475 "rw_mbytes_per_sec": 0, 00:04:37.475 "r_mbytes_per_sec": 0, 00:04:37.475 "w_mbytes_per_sec": 0 00:04:37.475 }, 00:04:37.475 "claimed": true, 00:04:37.475 "claim_type": "exclusive_write", 00:04:37.475 "zoned": false, 00:04:37.475 "supported_io_types": { 00:04:37.475 "read": true, 00:04:37.475 "write": true, 00:04:37.475 "unmap": true, 00:04:37.475 "flush": true, 00:04:37.475 "reset": true, 00:04:37.475 "nvme_admin": false, 00:04:37.475 "nvme_io": false, 00:04:37.475 "nvme_io_md": false, 00:04:37.475 "write_zeroes": true, 00:04:37.475 "zcopy": true, 00:04:37.475 "get_zone_info": false, 00:04:37.475 "zone_management": false, 00:04:37.475 "zone_append": false, 00:04:37.475 "compare": false, 00:04:37.475 "compare_and_write": false, 00:04:37.475 "abort": true, 00:04:37.475 "seek_hole": false, 00:04:37.475 "seek_data": false, 00:04:37.475 "copy": true, 00:04:37.475 "nvme_iov_md": false 00:04:37.475 }, 00:04:37.475 "memory_domains": [ 00:04:37.475 { 00:04:37.475 "dma_device_id": "system", 00:04:37.475 "dma_device_type": 1 00:04:37.475 }, 00:04:37.475 { 00:04:37.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.475 "dma_device_type": 2 00:04:37.475 } 00:04:37.475 ], 00:04:37.475 "driver_specific": {} 00:04:37.475 }, 00:04:37.475 { 00:04:37.475 "name": "Passthru0", 00:04:37.475 "aliases": [ 00:04:37.475 "52462206-fc1d-5048-80b4-d686354baf4a" 00:04:37.475 ], 00:04:37.475 "product_name": "passthru", 00:04:37.475 "block_size": 512, 00:04:37.475 "num_blocks": 16384, 00:04:37.475 "uuid": "52462206-fc1d-5048-80b4-d686354baf4a", 00:04:37.475 "assigned_rate_limits": { 00:04:37.475 "rw_ios_per_sec": 0, 00:04:37.475 "rw_mbytes_per_sec": 0, 00:04:37.475 "r_mbytes_per_sec": 0, 00:04:37.475 "w_mbytes_per_sec": 0 00:04:37.475 }, 00:04:37.475 "claimed": false, 00:04:37.475 "zoned": false, 00:04:37.475 "supported_io_types": { 00:04:37.475 "read": true, 00:04:37.475 "write": true, 00:04:37.475 "unmap": true, 00:04:37.475 "flush": true, 00:04:37.475 "reset": true, 00:04:37.475 "nvme_admin": false, 00:04:37.475 "nvme_io": false, 00:04:37.475 "nvme_io_md": false, 00:04:37.475 "write_zeroes": true, 00:04:37.475 "zcopy": true, 00:04:37.475 "get_zone_info": false, 00:04:37.475 "zone_management": false, 00:04:37.475 "zone_append": false, 00:04:37.475 "compare": false, 00:04:37.475 "compare_and_write": false, 00:04:37.475 "abort": true, 00:04:37.475 "seek_hole": false, 00:04:37.475 "seek_data": false, 00:04:37.475 "copy": true, 00:04:37.475 "nvme_iov_md": false 00:04:37.475 }, 00:04:37.475 "memory_domains": [ 00:04:37.475 { 00:04:37.475 "dma_device_id": "system", 00:04:37.475 "dma_device_type": 1 00:04:37.475 }, 
00:04:37.475 { 00:04:37.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.475 "dma_device_type": 2 00:04:37.475 } 00:04:37.475 ], 00:04:37.475 "driver_specific": { 00:04:37.475 "passthru": { 00:04:37.475 "name": "Passthru0", 00:04:37.475 "base_bdev_name": "Malloc0" 00:04:37.475 } 00:04:37.475 } 00:04:37.475 } 00:04:37.475 ]' 00:04:37.475 02:49:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:37.475 02:49:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:37.475 02:49:08 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:37.475 02:49:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.475 02:49:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.475 02:49:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.475 02:49:08 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:37.475 02:49:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.475 02:49:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.475 02:49:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.475 02:49:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:37.475 02:49:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.737 02:49:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.737 02:49:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.737 02:49:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:37.737 02:49:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:37.737 02:49:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:37.737 00:04:37.737 real 0m0.251s 00:04:37.737 user 0m0.134s 00:04:37.737 sys 0m0.030s 00:04:37.737 02:49:08 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.737 02:49:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.737 ************************************ 00:04:37.737 END TEST rpc_integrity 00:04:37.737 ************************************ 00:04:37.737 02:49:08 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:37.737 02:49:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.737 02:49:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.737 02:49:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.737 ************************************ 00:04:37.737 START TEST rpc_plugins 00:04:37.737 ************************************ 00:04:37.737 02:49:08 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:37.737 02:49:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:37.737 02:49:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.737 02:49:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.737 02:49:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.737 02:49:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:37.737 02:49:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:37.737 02:49:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.737 02:49:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.737 02:49:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.737 02:49:08 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:37.737 { 00:04:37.737 "name": "Malloc1", 00:04:37.737 "aliases": [ 00:04:37.737 "92e8d8de-6ad3-4163-afb6-ab0e9dd8ac36" 00:04:37.737 ], 00:04:37.737 "product_name": "Malloc disk", 00:04:37.737 "block_size": 4096, 00:04:37.737 "num_blocks": 256, 00:04:37.737 "uuid": "92e8d8de-6ad3-4163-afb6-ab0e9dd8ac36", 00:04:37.737 "assigned_rate_limits": { 00:04:37.737 "rw_ios_per_sec": 0, 00:04:37.737 "rw_mbytes_per_sec": 0, 00:04:37.737 "r_mbytes_per_sec": 0, 00:04:37.737 "w_mbytes_per_sec": 0 00:04:37.737 }, 00:04:37.737 "claimed": false, 00:04:37.737 "zoned": false, 00:04:37.737 "supported_io_types": { 00:04:37.737 "read": true, 00:04:37.737 "write": true, 00:04:37.737 "unmap": true, 00:04:37.737 "flush": true, 00:04:37.737 "reset": true, 00:04:37.737 "nvme_admin": false, 00:04:37.737 "nvme_io": false, 00:04:37.737 "nvme_io_md": false, 00:04:37.737 "write_zeroes": true, 00:04:37.737 "zcopy": true, 00:04:37.737 "get_zone_info": false, 00:04:37.737 "zone_management": false, 00:04:37.737 "zone_append": false, 00:04:37.737 "compare": false, 00:04:37.737 "compare_and_write": false, 00:04:37.737 "abort": true, 00:04:37.737 "seek_hole": false, 00:04:37.737 "seek_data": false, 00:04:37.737 "copy": true, 00:04:37.737 "nvme_iov_md": false 00:04:37.737 }, 00:04:37.737 "memory_domains": [ 00:04:37.737 { 00:04:37.737 "dma_device_id": "system", 00:04:37.737 "dma_device_type": 1 00:04:37.737 }, 00:04:37.737 { 00:04:37.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.737 "dma_device_type": 2 00:04:37.737 } 00:04:37.737 ], 00:04:37.737 "driver_specific": {} 00:04:37.737 } 00:04:37.737 ]' 00:04:37.737 02:49:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:37.737 02:49:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:37.737 02:49:08 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:37.737 02:49:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.737 02:49:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.737 02:49:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.737 02:49:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:37.737 02:49:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.737 02:49:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.737 02:49:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.737 02:49:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:37.737 02:49:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:37.737 02:49:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:37.737 00:04:37.737 real 0m0.110s 00:04:37.737 user 0m0.060s 00:04:37.737 sys 0m0.014s 00:04:37.737 02:49:08 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.737 02:49:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.737 ************************************ 00:04:37.737 END TEST rpc_plugins 00:04:37.737 ************************************ 00:04:37.737 02:49:08 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:37.737 02:49:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.737 02:49:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.737 02:49:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.737 ************************************ 00:04:37.737 START TEST rpc_trace_cmd_test 
00:04:37.737 ************************************ 00:04:37.737 02:49:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:37.737 02:49:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:37.737 02:49:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:37.737 02:49:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.737 02:49:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:37.737 02:49:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.737 02:49:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:37.738 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57134", 00:04:37.738 "tpoint_group_mask": "0x8", 00:04:37.738 "iscsi_conn": { 00:04:37.738 "mask": "0x2", 00:04:37.738 "tpoint_mask": "0x0" 00:04:37.738 }, 00:04:37.738 "scsi": { 00:04:37.738 "mask": "0x4", 00:04:37.738 "tpoint_mask": "0x0" 00:04:37.738 }, 00:04:37.738 "bdev": { 00:04:37.738 "mask": "0x8", 00:04:37.738 "tpoint_mask": "0xffffffffffffffff" 00:04:37.738 }, 00:04:37.738 "nvmf_rdma": { 00:04:37.738 "mask": "0x10", 00:04:37.738 "tpoint_mask": "0x0" 00:04:37.738 }, 00:04:37.738 "nvmf_tcp": { 00:04:37.738 "mask": "0x20", 00:04:37.738 "tpoint_mask": "0x0" 00:04:37.738 }, 00:04:37.738 "ftl": { 00:04:37.738 "mask": "0x40", 00:04:37.738 "tpoint_mask": "0x0" 00:04:37.738 }, 00:04:37.738 "blobfs": { 00:04:37.738 "mask": "0x80", 00:04:37.738 "tpoint_mask": "0x0" 00:04:37.738 }, 00:04:37.738 "dsa": { 00:04:37.738 "mask": "0x200", 00:04:37.738 "tpoint_mask": "0x0" 00:04:37.738 }, 00:04:37.738 "thread": { 00:04:37.738 "mask": "0x400", 00:04:37.738 "tpoint_mask": "0x0" 00:04:37.738 }, 00:04:37.738 "nvme_pcie": { 00:04:37.738 "mask": "0x800", 00:04:37.738 "tpoint_mask": "0x0" 00:04:37.738 }, 00:04:37.738 "iaa": { 00:04:37.738 "mask": "0x1000", 00:04:37.738 "tpoint_mask": "0x0" 00:04:37.738 }, 00:04:37.738 "nvme_tcp": { 00:04:37.738 "mask": "0x2000", 00:04:37.738 "tpoint_mask": "0x0" 00:04:37.738 }, 00:04:37.738 "bdev_nvme": { 00:04:37.738 "mask": "0x4000", 00:04:37.738 "tpoint_mask": "0x0" 00:04:37.738 }, 00:04:37.738 "sock": { 00:04:37.738 "mask": "0x8000", 00:04:37.738 "tpoint_mask": "0x0" 00:04:37.738 }, 00:04:37.738 "blob": { 00:04:37.738 "mask": "0x10000", 00:04:37.738 "tpoint_mask": "0x0" 00:04:37.738 }, 00:04:37.738 "bdev_raid": { 00:04:37.738 "mask": "0x20000", 00:04:37.738 "tpoint_mask": "0x0" 00:04:37.738 }, 00:04:37.738 "scheduler": { 00:04:37.738 "mask": "0x40000", 00:04:37.738 "tpoint_mask": "0x0" 00:04:37.738 } 00:04:37.738 }' 00:04:37.738 02:49:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:37.999 02:49:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:37.999 02:49:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:37.999 02:49:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:37.999 02:49:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:37.999 02:49:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:37.999 02:49:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:37.999 02:49:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:37.999 02:49:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:37.999 02:49:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:37.999 00:04:37.999 real 0m0.164s 00:04:37.999 
user 0m0.137s 00:04:37.999 sys 0m0.020s 00:04:37.999 02:49:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.999 02:49:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:37.999 ************************************ 00:04:37.999 END TEST rpc_trace_cmd_test 00:04:37.999 ************************************ 00:04:37.999 02:49:08 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:37.999 02:49:08 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:37.999 02:49:08 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:37.999 02:49:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.999 02:49:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.999 02:49:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.999 ************************************ 00:04:37.999 START TEST rpc_daemon_integrity 00:04:37.999 ************************************ 00:04:37.999 02:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:37.999 02:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:37.999 02:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.999 02:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.999 02:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.999 02:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:37.999 02:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:37.999 02:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:37.999 02:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:37.999 02:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.999 02:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.999 02:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.999 02:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:37.999 02:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:37.999 02:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.999 02:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.999 02:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.999 02:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:37.999 { 00:04:37.999 "name": "Malloc2", 00:04:37.999 "aliases": [ 00:04:37.999 "b794c135-0b3b-442c-8457-917fc016f1f5" 00:04:37.999 ], 00:04:37.999 "product_name": "Malloc disk", 00:04:37.999 "block_size": 512, 00:04:37.999 "num_blocks": 16384, 00:04:37.999 "uuid": "b794c135-0b3b-442c-8457-917fc016f1f5", 00:04:37.999 "assigned_rate_limits": { 00:04:37.999 "rw_ios_per_sec": 0, 00:04:37.999 "rw_mbytes_per_sec": 0, 00:04:37.999 "r_mbytes_per_sec": 0, 00:04:37.999 "w_mbytes_per_sec": 0 00:04:37.999 }, 00:04:37.999 "claimed": false, 00:04:37.999 "zoned": false, 00:04:37.999 "supported_io_types": { 00:04:37.999 "read": true, 00:04:37.999 "write": true, 00:04:37.999 "unmap": true, 00:04:37.999 "flush": true, 00:04:37.999 "reset": true, 00:04:37.999 "nvme_admin": false, 00:04:37.999 "nvme_io": false, 00:04:37.999 "nvme_io_md": false, 00:04:37.999 "write_zeroes": true, 00:04:37.999 "zcopy": true, 00:04:37.999 "get_zone_info": 
false, 00:04:37.999 "zone_management": false, 00:04:37.999 "zone_append": false, 00:04:37.999 "compare": false, 00:04:37.999 "compare_and_write": false, 00:04:37.999 "abort": true, 00:04:37.999 "seek_hole": false, 00:04:37.999 "seek_data": false, 00:04:37.999 "copy": true, 00:04:37.999 "nvme_iov_md": false 00:04:37.999 }, 00:04:37.999 "memory_domains": [ 00:04:37.999 { 00:04:37.999 "dma_device_id": "system", 00:04:37.999 "dma_device_type": 1 00:04:37.999 }, 00:04:37.999 { 00:04:37.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.999 "dma_device_type": 2 00:04:37.999 } 00:04:37.999 ], 00:04:37.999 "driver_specific": {} 00:04:37.999 } 00:04:37.999 ]' 00:04:37.999 02:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:38.260 02:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:38.260 02:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:38.260 02:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.260 02:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.260 [2024-12-05 02:49:08.867813] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:38.260 [2024-12-05 02:49:08.867879] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:38.260 [2024-12-05 02:49:08.867900] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:38.260 [2024-12-05 02:49:08.867912] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:38.260 [2024-12-05 02:49:08.870204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:38.260 [2024-12-05 02:49:08.870244] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:38.260 Passthru0 00:04:38.260 02:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.260 02:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:38.260 02:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.260 02:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.260 02:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.260 02:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:38.260 { 00:04:38.260 "name": "Malloc2", 00:04:38.260 "aliases": [ 00:04:38.260 "b794c135-0b3b-442c-8457-917fc016f1f5" 00:04:38.260 ], 00:04:38.260 "product_name": "Malloc disk", 00:04:38.260 "block_size": 512, 00:04:38.260 "num_blocks": 16384, 00:04:38.260 "uuid": "b794c135-0b3b-442c-8457-917fc016f1f5", 00:04:38.260 "assigned_rate_limits": { 00:04:38.260 "rw_ios_per_sec": 0, 00:04:38.260 "rw_mbytes_per_sec": 0, 00:04:38.260 "r_mbytes_per_sec": 0, 00:04:38.260 "w_mbytes_per_sec": 0 00:04:38.260 }, 00:04:38.260 "claimed": true, 00:04:38.260 "claim_type": "exclusive_write", 00:04:38.261 "zoned": false, 00:04:38.261 "supported_io_types": { 00:04:38.261 "read": true, 00:04:38.261 "write": true, 00:04:38.261 "unmap": true, 00:04:38.261 "flush": true, 00:04:38.261 "reset": true, 00:04:38.261 "nvme_admin": false, 00:04:38.261 "nvme_io": false, 00:04:38.261 "nvme_io_md": false, 00:04:38.261 "write_zeroes": true, 00:04:38.261 "zcopy": true, 00:04:38.261 "get_zone_info": false, 00:04:38.261 "zone_management": false, 00:04:38.261 "zone_append": false, 00:04:38.261 "compare": false, 
00:04:38.261 "compare_and_write": false, 00:04:38.261 "abort": true, 00:04:38.261 "seek_hole": false, 00:04:38.261 "seek_data": false, 00:04:38.261 "copy": true, 00:04:38.261 "nvme_iov_md": false 00:04:38.261 }, 00:04:38.261 "memory_domains": [ 00:04:38.261 { 00:04:38.261 "dma_device_id": "system", 00:04:38.261 "dma_device_type": 1 00:04:38.261 }, 00:04:38.261 { 00:04:38.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.261 "dma_device_type": 2 00:04:38.261 } 00:04:38.261 ], 00:04:38.261 "driver_specific": {} 00:04:38.261 }, 00:04:38.261 { 00:04:38.261 "name": "Passthru0", 00:04:38.261 "aliases": [ 00:04:38.261 "65957f9f-f3b0-543d-807f-7b593fc9187c" 00:04:38.261 ], 00:04:38.261 "product_name": "passthru", 00:04:38.261 "block_size": 512, 00:04:38.261 "num_blocks": 16384, 00:04:38.261 "uuid": "65957f9f-f3b0-543d-807f-7b593fc9187c", 00:04:38.261 "assigned_rate_limits": { 00:04:38.261 "rw_ios_per_sec": 0, 00:04:38.261 "rw_mbytes_per_sec": 0, 00:04:38.261 "r_mbytes_per_sec": 0, 00:04:38.261 "w_mbytes_per_sec": 0 00:04:38.261 }, 00:04:38.261 "claimed": false, 00:04:38.261 "zoned": false, 00:04:38.261 "supported_io_types": { 00:04:38.261 "read": true, 00:04:38.261 "write": true, 00:04:38.261 "unmap": true, 00:04:38.261 "flush": true, 00:04:38.261 "reset": true, 00:04:38.261 "nvme_admin": false, 00:04:38.261 "nvme_io": false, 00:04:38.261 "nvme_io_md": false, 00:04:38.261 "write_zeroes": true, 00:04:38.261 "zcopy": true, 00:04:38.261 "get_zone_info": false, 00:04:38.261 "zone_management": false, 00:04:38.261 "zone_append": false, 00:04:38.261 "compare": false, 00:04:38.261 "compare_and_write": false, 00:04:38.261 "abort": true, 00:04:38.261 "seek_hole": false, 00:04:38.261 "seek_data": false, 00:04:38.261 "copy": true, 00:04:38.261 "nvme_iov_md": false 00:04:38.261 }, 00:04:38.261 "memory_domains": [ 00:04:38.261 { 00:04:38.261 "dma_device_id": "system", 00:04:38.261 "dma_device_type": 1 00:04:38.261 }, 00:04:38.261 { 00:04:38.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.261 "dma_device_type": 2 00:04:38.261 } 00:04:38.261 ], 00:04:38.261 "driver_specific": { 00:04:38.261 "passthru": { 00:04:38.261 "name": "Passthru0", 00:04:38.261 "base_bdev_name": "Malloc2" 00:04:38.261 } 00:04:38.261 } 00:04:38.261 } 00:04:38.261 ]' 00:04:38.261 02:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:38.261 02:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:38.261 02:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:38.261 02:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.261 02:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.261 02:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.261 02:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:38.261 02:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.261 02:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.261 02:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.261 02:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:38.261 02:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.261 02:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.261 02:49:08 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.261 02:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:38.261 02:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:38.261 02:49:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:38.261 00:04:38.261 real 0m0.247s 00:04:38.261 user 0m0.133s 00:04:38.261 sys 0m0.029s 00:04:38.261 02:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.261 02:49:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.261 ************************************ 00:04:38.261 END TEST rpc_daemon_integrity 00:04:38.261 ************************************ 00:04:38.261 02:49:09 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:38.261 02:49:09 rpc -- rpc/rpc.sh@84 -- # killprocess 57134 00:04:38.261 02:49:09 rpc -- common/autotest_common.sh@954 -- # '[' -z 57134 ']' 00:04:38.261 02:49:09 rpc -- common/autotest_common.sh@958 -- # kill -0 57134 00:04:38.261 02:49:09 rpc -- common/autotest_common.sh@959 -- # uname 00:04:38.261 02:49:09 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:38.261 02:49:09 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57134 00:04:38.261 02:49:09 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:38.261 02:49:09 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:38.261 02:49:09 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57134' 00:04:38.261 killing process with pid 57134 00:04:38.261 02:49:09 rpc -- common/autotest_common.sh@973 -- # kill 57134 00:04:38.261 02:49:09 rpc -- common/autotest_common.sh@978 -- # wait 57134 00:04:40.210 00:04:40.210 real 0m3.620s 00:04:40.210 user 0m4.047s 00:04:40.210 sys 0m0.569s 00:04:40.210 02:49:10 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.210 ************************************ 00:04:40.210 END TEST rpc 00:04:40.210 ************************************ 00:04:40.210 02:49:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.210 02:49:10 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:40.210 02:49:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.210 02:49:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.210 02:49:10 -- common/autotest_common.sh@10 -- # set +x 00:04:40.210 ************************************ 00:04:40.210 START TEST skip_rpc 00:04:40.210 ************************************ 00:04:40.210 02:49:10 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:40.210 * Looking for test storage... 
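The rpc_daemon_integrity run above walks the passthru bdev lifecycle over JSON-RPC: a malloc bdev (Malloc2, 16384 blocks of 512 bytes) is wrapped by Passthru0, bdev_get_bdevs is expected to report both, and both are deleted again. A minimal sketch of the same sequence driven by scripts/rpc.py against a running spdk_tgt (paths and the 8 MiB malloc size are assumptions inferred from the bdev dump above):

    # create the base malloc bdev: 8 MiB of 512-byte blocks -> 16384 blocks (size assumed)
    scripts/rpc.py bdev_malloc_create 8 512 -b Malloc2
    # wrap it in a passthru bdev, which claims the base bdev
    scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
    # both bdevs should now be visible
    scripts/rpc.py bdev_get_bdevs | jq length     # expected: 2
    # tear down in reverse order
    scripts/rpc.py bdev_passthru_delete Passthru0
    scripts/rpc.py bdev_malloc_delete Malloc2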
00:04:40.210 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:40.211 02:49:10 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:40.211 02:49:10 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:40.211 02:49:10 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:40.211 02:49:10 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:40.211 02:49:10 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.211 02:49:10 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.211 02:49:10 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.211 02:49:10 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.211 02:49:10 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.211 02:49:10 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.211 02:49:10 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.211 02:49:10 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.211 02:49:10 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.211 02:49:10 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.211 02:49:10 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.211 02:49:10 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:40.211 02:49:10 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:40.211 02:49:10 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.211 02:49:10 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:40.211 02:49:10 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:40.211 02:49:10 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:40.211 02:49:10 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.211 02:49:10 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:40.211 02:49:10 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.211 02:49:10 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:40.211 02:49:10 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:40.211 02:49:10 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.211 02:49:10 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:40.211 02:49:10 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.211 02:49:10 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.211 02:49:10 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.211 02:49:10 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:40.211 02:49:10 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.211 02:49:10 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:40.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.211 --rc genhtml_branch_coverage=1 00:04:40.211 --rc genhtml_function_coverage=1 00:04:40.211 --rc genhtml_legend=1 00:04:40.211 --rc geninfo_all_blocks=1 00:04:40.211 --rc geninfo_unexecuted_blocks=1 00:04:40.211 00:04:40.211 ' 00:04:40.211 02:49:10 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:40.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.211 --rc genhtml_branch_coverage=1 00:04:40.211 --rc genhtml_function_coverage=1 00:04:40.211 --rc genhtml_legend=1 00:04:40.211 --rc geninfo_all_blocks=1 00:04:40.211 --rc geninfo_unexecuted_blocks=1 00:04:40.211 00:04:40.211 ' 00:04:40.211 02:49:10 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:40.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.211 --rc genhtml_branch_coverage=1 00:04:40.211 --rc genhtml_function_coverage=1 00:04:40.211 --rc genhtml_legend=1 00:04:40.211 --rc geninfo_all_blocks=1 00:04:40.211 --rc geninfo_unexecuted_blocks=1 00:04:40.211 00:04:40.211 ' 00:04:40.211 02:49:10 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:40.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.211 --rc genhtml_branch_coverage=1 00:04:40.211 --rc genhtml_function_coverage=1 00:04:40.211 --rc genhtml_legend=1 00:04:40.211 --rc geninfo_all_blocks=1 00:04:40.211 --rc geninfo_unexecuted_blocks=1 00:04:40.211 00:04:40.211 ' 00:04:40.211 02:49:10 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:40.211 02:49:10 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:40.211 02:49:10 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:40.211 02:49:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.211 02:49:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.211 02:49:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.211 ************************************ 00:04:40.211 START TEST skip_rpc 00:04:40.211 ************************************ 00:04:40.211 02:49:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:40.211 02:49:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57352 00:04:40.211 02:49:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:40.211 02:49:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:40.211 02:49:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:40.211 [2024-12-05 02:49:10.969683] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
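spdk_tgt has just been started with --no-rpc-server here, so the check that follows (NOT rpc_cmd spdk_get_version) only passes if the RPC call fails: nothing is listening on the default /var/tmp/spdk.sock. A rough sketch of the same check outside the test harness (binary and script paths are illustrative):

    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    sleep 5
    # expected to fail, since no RPC server was started
    scripts/rpc.py spdk_get_version && echo 'unexpected success' || echo 'rpc failed as expected'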
00:04:40.211 [2024-12-05 02:49:10.969844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57352 ] 00:04:40.470 [2024-12-05 02:49:11.133134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.470 [2024-12-05 02:49:11.283958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.761 02:49:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:45.761 02:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:45.761 02:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:45.761 02:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:45.761 02:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:45.761 02:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:45.761 02:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:45.761 02:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:45.761 02:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.761 02:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.761 02:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:45.761 02:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:45.761 02:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:45.761 02:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:45.761 02:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:45.761 02:49:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:45.761 02:49:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57352 00:04:45.761 02:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57352 ']' 00:04:45.761 02:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57352 00:04:45.761 02:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:45.761 02:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.761 02:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57352 00:04:45.761 02:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:45.761 killing process with pid 57352 00:04:45.761 02:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:45.761 02:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57352' 00:04:45.761 02:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57352 00:04:45.761 02:49:15 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57352 00:04:46.697 00:04:46.697 real 0m6.467s 00:04:46.697 user 0m5.870s 00:04:46.697 sys 0m0.484s 00:04:46.697 02:49:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.697 02:49:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.697 ************************************ 00:04:46.697 END TEST skip_rpc 00:04:46.697 
************************************ 00:04:46.697 02:49:17 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:46.697 02:49:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.697 02:49:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.697 02:49:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.697 ************************************ 00:04:46.697 START TEST skip_rpc_with_json 00:04:46.697 ************************************ 00:04:46.697 02:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:46.697 02:49:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:46.697 02:49:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57445 00:04:46.697 02:49:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.697 02:49:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57445 00:04:46.697 02:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57445 ']' 00:04:46.697 02:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.697 02:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.697 02:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.697 02:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.697 02:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.698 02:49:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.698 [2024-12-05 02:49:17.470333] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:04:46.698 [2024-12-05 02:49:17.470449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57445 ] 00:04:46.957 [2024-12-05 02:49:17.626729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.957 [2024-12-05 02:49:17.713697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.520 02:49:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.520 02:49:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:47.520 02:49:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:47.520 02:49:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.520 02:49:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.520 [2024-12-05 02:49:18.300758] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:47.520 request: 00:04:47.520 { 00:04:47.520 "trtype": "tcp", 00:04:47.520 "method": "nvmf_get_transports", 00:04:47.520 "req_id": 1 00:04:47.520 } 00:04:47.520 Got JSON-RPC error response 00:04:47.520 response: 00:04:47.520 { 00:04:47.520 "code": -19, 00:04:47.520 "message": "No such device" 00:04:47.520 } 00:04:47.520 02:49:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:47.520 02:49:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:47.520 02:49:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.520 02:49:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.520 [2024-12-05 02:49:18.308847] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:47.520 02:49:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.520 02:49:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:47.520 02:49:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.520 02:49:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.778 02:49:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.778 02:49:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:47.778 { 00:04:47.778 "subsystems": [ 00:04:47.778 { 00:04:47.778 "subsystem": "fsdev", 00:04:47.778 "config": [ 00:04:47.778 { 00:04:47.778 "method": "fsdev_set_opts", 00:04:47.778 "params": { 00:04:47.778 "fsdev_io_pool_size": 65535, 00:04:47.778 "fsdev_io_cache_size": 256 00:04:47.778 } 00:04:47.778 } 00:04:47.778 ] 00:04:47.778 }, 00:04:47.778 { 00:04:47.778 "subsystem": "keyring", 00:04:47.778 "config": [] 00:04:47.778 }, 00:04:47.778 { 00:04:47.778 "subsystem": "iobuf", 00:04:47.778 "config": [ 00:04:47.778 { 00:04:47.778 "method": "iobuf_set_options", 00:04:47.778 "params": { 00:04:47.778 "small_pool_count": 8192, 00:04:47.778 "large_pool_count": 1024, 00:04:47.778 "small_bufsize": 8192, 00:04:47.778 "large_bufsize": 135168, 00:04:47.778 "enable_numa": false 00:04:47.778 } 00:04:47.778 } 00:04:47.778 ] 00:04:47.778 }, 00:04:47.778 { 00:04:47.778 "subsystem": "sock", 00:04:47.778 "config": [ 00:04:47.778 { 
00:04:47.778 "method": "sock_set_default_impl", 00:04:47.778 "params": { 00:04:47.778 "impl_name": "posix" 00:04:47.778 } 00:04:47.778 }, 00:04:47.778 { 00:04:47.778 "method": "sock_impl_set_options", 00:04:47.778 "params": { 00:04:47.778 "impl_name": "ssl", 00:04:47.778 "recv_buf_size": 4096, 00:04:47.778 "send_buf_size": 4096, 00:04:47.778 "enable_recv_pipe": true, 00:04:47.778 "enable_quickack": false, 00:04:47.778 "enable_placement_id": 0, 00:04:47.778 "enable_zerocopy_send_server": true, 00:04:47.778 "enable_zerocopy_send_client": false, 00:04:47.778 "zerocopy_threshold": 0, 00:04:47.778 "tls_version": 0, 00:04:47.778 "enable_ktls": false 00:04:47.778 } 00:04:47.778 }, 00:04:47.778 { 00:04:47.778 "method": "sock_impl_set_options", 00:04:47.778 "params": { 00:04:47.778 "impl_name": "posix", 00:04:47.778 "recv_buf_size": 2097152, 00:04:47.778 "send_buf_size": 2097152, 00:04:47.778 "enable_recv_pipe": true, 00:04:47.778 "enable_quickack": false, 00:04:47.778 "enable_placement_id": 0, 00:04:47.778 "enable_zerocopy_send_server": true, 00:04:47.778 "enable_zerocopy_send_client": false, 00:04:47.778 "zerocopy_threshold": 0, 00:04:47.778 "tls_version": 0, 00:04:47.778 "enable_ktls": false 00:04:47.778 } 00:04:47.778 } 00:04:47.778 ] 00:04:47.778 }, 00:04:47.778 { 00:04:47.778 "subsystem": "vmd", 00:04:47.778 "config": [] 00:04:47.778 }, 00:04:47.778 { 00:04:47.778 "subsystem": "accel", 00:04:47.778 "config": [ 00:04:47.778 { 00:04:47.778 "method": "accel_set_options", 00:04:47.778 "params": { 00:04:47.778 "small_cache_size": 128, 00:04:47.778 "large_cache_size": 16, 00:04:47.778 "task_count": 2048, 00:04:47.778 "sequence_count": 2048, 00:04:47.778 "buf_count": 2048 00:04:47.778 } 00:04:47.778 } 00:04:47.778 ] 00:04:47.778 }, 00:04:47.778 { 00:04:47.778 "subsystem": "bdev", 00:04:47.778 "config": [ 00:04:47.778 { 00:04:47.778 "method": "bdev_set_options", 00:04:47.778 "params": { 00:04:47.778 "bdev_io_pool_size": 65535, 00:04:47.778 "bdev_io_cache_size": 256, 00:04:47.778 "bdev_auto_examine": true, 00:04:47.778 "iobuf_small_cache_size": 128, 00:04:47.778 "iobuf_large_cache_size": 16 00:04:47.778 } 00:04:47.778 }, 00:04:47.778 { 00:04:47.778 "method": "bdev_raid_set_options", 00:04:47.779 "params": { 00:04:47.779 "process_window_size_kb": 1024, 00:04:47.779 "process_max_bandwidth_mb_sec": 0 00:04:47.779 } 00:04:47.779 }, 00:04:47.779 { 00:04:47.779 "method": "bdev_iscsi_set_options", 00:04:47.779 "params": { 00:04:47.779 "timeout_sec": 30 00:04:47.779 } 00:04:47.779 }, 00:04:47.779 { 00:04:47.779 "method": "bdev_nvme_set_options", 00:04:47.779 "params": { 00:04:47.779 "action_on_timeout": "none", 00:04:47.779 "timeout_us": 0, 00:04:47.779 "timeout_admin_us": 0, 00:04:47.779 "keep_alive_timeout_ms": 10000, 00:04:47.779 "arbitration_burst": 0, 00:04:47.779 "low_priority_weight": 0, 00:04:47.779 "medium_priority_weight": 0, 00:04:47.779 "high_priority_weight": 0, 00:04:47.779 "nvme_adminq_poll_period_us": 10000, 00:04:47.779 "nvme_ioq_poll_period_us": 0, 00:04:47.779 "io_queue_requests": 0, 00:04:47.779 "delay_cmd_submit": true, 00:04:47.779 "transport_retry_count": 4, 00:04:47.779 "bdev_retry_count": 3, 00:04:47.779 "transport_ack_timeout": 0, 00:04:47.779 "ctrlr_loss_timeout_sec": 0, 00:04:47.779 "reconnect_delay_sec": 0, 00:04:47.779 "fast_io_fail_timeout_sec": 0, 00:04:47.779 "disable_auto_failback": false, 00:04:47.779 "generate_uuids": false, 00:04:47.779 "transport_tos": 0, 00:04:47.779 "nvme_error_stat": false, 00:04:47.779 "rdma_srq_size": 0, 00:04:47.779 "io_path_stat": false, 
00:04:47.779 "allow_accel_sequence": false, 00:04:47.779 "rdma_max_cq_size": 0, 00:04:47.779 "rdma_cm_event_timeout_ms": 0, 00:04:47.779 "dhchap_digests": [ 00:04:47.779 "sha256", 00:04:47.779 "sha384", 00:04:47.779 "sha512" 00:04:47.779 ], 00:04:47.779 "dhchap_dhgroups": [ 00:04:47.779 "null", 00:04:47.779 "ffdhe2048", 00:04:47.779 "ffdhe3072", 00:04:47.779 "ffdhe4096", 00:04:47.779 "ffdhe6144", 00:04:47.779 "ffdhe8192" 00:04:47.779 ] 00:04:47.779 } 00:04:47.779 }, 00:04:47.779 { 00:04:47.779 "method": "bdev_nvme_set_hotplug", 00:04:47.779 "params": { 00:04:47.779 "period_us": 100000, 00:04:47.779 "enable": false 00:04:47.779 } 00:04:47.779 }, 00:04:47.779 { 00:04:47.779 "method": "bdev_wait_for_examine" 00:04:47.779 } 00:04:47.779 ] 00:04:47.779 }, 00:04:47.779 { 00:04:47.779 "subsystem": "scsi", 00:04:47.779 "config": null 00:04:47.779 }, 00:04:47.779 { 00:04:47.779 "subsystem": "scheduler", 00:04:47.779 "config": [ 00:04:47.779 { 00:04:47.779 "method": "framework_set_scheduler", 00:04:47.779 "params": { 00:04:47.779 "name": "static" 00:04:47.779 } 00:04:47.779 } 00:04:47.779 ] 00:04:47.779 }, 00:04:47.779 { 00:04:47.779 "subsystem": "vhost_scsi", 00:04:47.779 "config": [] 00:04:47.779 }, 00:04:47.779 { 00:04:47.779 "subsystem": "vhost_blk", 00:04:47.779 "config": [] 00:04:47.779 }, 00:04:47.779 { 00:04:47.779 "subsystem": "ublk", 00:04:47.779 "config": [] 00:04:47.779 }, 00:04:47.779 { 00:04:47.779 "subsystem": "nbd", 00:04:47.779 "config": [] 00:04:47.779 }, 00:04:47.779 { 00:04:47.779 "subsystem": "nvmf", 00:04:47.779 "config": [ 00:04:47.779 { 00:04:47.779 "method": "nvmf_set_config", 00:04:47.779 "params": { 00:04:47.779 "discovery_filter": "match_any", 00:04:47.779 "admin_cmd_passthru": { 00:04:47.779 "identify_ctrlr": false 00:04:47.779 }, 00:04:47.779 "dhchap_digests": [ 00:04:47.779 "sha256", 00:04:47.779 "sha384", 00:04:47.779 "sha512" 00:04:47.779 ], 00:04:47.779 "dhchap_dhgroups": [ 00:04:47.779 "null", 00:04:47.779 "ffdhe2048", 00:04:47.779 "ffdhe3072", 00:04:47.779 "ffdhe4096", 00:04:47.779 "ffdhe6144", 00:04:47.779 "ffdhe8192" 00:04:47.779 ] 00:04:47.779 } 00:04:47.779 }, 00:04:47.779 { 00:04:47.779 "method": "nvmf_set_max_subsystems", 00:04:47.779 "params": { 00:04:47.779 "max_subsystems": 1024 00:04:47.779 } 00:04:47.779 }, 00:04:47.779 { 00:04:47.779 "method": "nvmf_set_crdt", 00:04:47.779 "params": { 00:04:47.779 "crdt1": 0, 00:04:47.779 "crdt2": 0, 00:04:47.779 "crdt3": 0 00:04:47.779 } 00:04:47.779 }, 00:04:47.779 { 00:04:47.779 "method": "nvmf_create_transport", 00:04:47.779 "params": { 00:04:47.779 "trtype": "TCP", 00:04:47.779 "max_queue_depth": 128, 00:04:47.779 "max_io_qpairs_per_ctrlr": 127, 00:04:47.779 "in_capsule_data_size": 4096, 00:04:47.779 "max_io_size": 131072, 00:04:47.779 "io_unit_size": 131072, 00:04:47.779 "max_aq_depth": 128, 00:04:47.779 "num_shared_buffers": 511, 00:04:47.779 "buf_cache_size": 4294967295, 00:04:47.779 "dif_insert_or_strip": false, 00:04:47.779 "zcopy": false, 00:04:47.779 "c2h_success": true, 00:04:47.779 "sock_priority": 0, 00:04:47.779 "abort_timeout_sec": 1, 00:04:47.779 "ack_timeout": 0, 00:04:47.779 "data_wr_pool_size": 0 00:04:47.779 } 00:04:47.779 } 00:04:47.779 ] 00:04:47.779 }, 00:04:47.779 { 00:04:47.779 "subsystem": "iscsi", 00:04:47.779 "config": [ 00:04:47.779 { 00:04:47.779 "method": "iscsi_set_options", 00:04:47.779 "params": { 00:04:47.779 "node_base": "iqn.2016-06.io.spdk", 00:04:47.779 "max_sessions": 128, 00:04:47.779 "max_connections_per_session": 2, 00:04:47.779 "max_queue_depth": 64, 00:04:47.779 
"default_time2wait": 2, 00:04:47.779 "default_time2retain": 20, 00:04:47.779 "first_burst_length": 8192, 00:04:47.779 "immediate_data": true, 00:04:47.779 "allow_duplicated_isid": false, 00:04:47.779 "error_recovery_level": 0, 00:04:47.779 "nop_timeout": 60, 00:04:47.779 "nop_in_interval": 30, 00:04:47.779 "disable_chap": false, 00:04:47.779 "require_chap": false, 00:04:47.779 "mutual_chap": false, 00:04:47.779 "chap_group": 0, 00:04:47.779 "max_large_datain_per_connection": 64, 00:04:47.779 "max_r2t_per_connection": 4, 00:04:47.779 "pdu_pool_size": 36864, 00:04:47.779 "immediate_data_pool_size": 16384, 00:04:47.779 "data_out_pool_size": 2048 00:04:47.779 } 00:04:47.779 } 00:04:47.779 ] 00:04:47.779 } 00:04:47.779 ] 00:04:47.779 } 00:04:47.779 02:49:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:47.779 02:49:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57445 00:04:47.779 02:49:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57445 ']' 00:04:47.779 02:49:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57445 00:04:47.779 02:49:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:47.779 02:49:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:47.779 02:49:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57445 00:04:47.779 02:49:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:47.779 killing process with pid 57445 00:04:47.779 02:49:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:47.779 02:49:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57445' 00:04:47.779 02:49:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57445 00:04:47.779 02:49:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57445 00:04:49.202 02:49:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57485 00:04:49.202 02:49:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:49.202 02:49:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:54.490 02:49:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57485 00:04:54.490 02:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57485 ']' 00:04:54.490 02:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57485 00:04:54.490 02:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:54.490 02:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.490 02:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57485 00:04:54.490 02:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.490 killing process with pid 57485 00:04:54.490 02:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.490 02:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57485' 00:04:54.490 02:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57485 00:04:54.490 02:49:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57485 00:04:55.426 02:49:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:55.426 02:49:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:55.426 00:04:55.426 real 0m8.659s 00:04:55.426 user 0m8.161s 00:04:55.426 sys 0m0.713s 00:04:55.426 02:49:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.426 02:49:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:55.426 ************************************ 00:04:55.426 END TEST skip_rpc_with_json 00:04:55.426 ************************************ 00:04:55.426 02:49:26 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:55.426 02:49:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.426 02:49:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.426 02:49:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.426 ************************************ 00:04:55.426 START TEST skip_rpc_with_delay 00:04:55.426 ************************************ 00:04:55.426 02:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:55.426 02:49:26 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:55.426 02:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:55.426 02:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:55.426 02:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.426 02:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:55.426 02:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.426 02:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:55.426 02:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.426 02:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:55.426 02:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.426 02:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:55.426 02:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:55.426 [2024-12-05 02:49:26.169699] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
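Two results land on this stretch of the log. First, skip_rpc_with_json has just finished: the configuration saved earlier with save_config was fed back to a fresh spdk_tgt via --json, and the grep for 'TCP Transport Init' in its log confirms the TCP transport was recreated from the file. Second, skip_rpc_with_delay begins, and the error above is its expected outcome: --wait-for-rpc holds subsystem initialization until a start-init RPC arrives, which is impossible when --no-rpc-server has disabled the RPC server, so spdk_app_start rejects the combination. A sketch of both patterns (paths illustrative; framework_start_init is the standard SPDK RPC for delayed init and is assumed here rather than shown in this log):

    # config round-trip, as in skip_rpc_with_json
    scripts/rpc.py save_config > config.json
    build/bin/spdk_tgt --json config.json -m 0x1
    # rejected combination, as in skip_rpc_with_delay
    build/bin/spdk_tgt --no-rpc-server --wait-for-rpc -m 0x1
    # usual delayed-init flow instead
    build/bin/spdk_tgt --wait-for-rpc -m 0x1 &
    scripts/rpc.py framework_start_init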
00:04:55.426 02:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:55.426 02:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:55.426 02:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:55.426 ************************************ 00:04:55.426 END TEST skip_rpc_with_delay 00:04:55.426 ************************************ 00:04:55.426 02:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:55.426 00:04:55.426 real 0m0.132s 00:04:55.426 user 0m0.066s 00:04:55.426 sys 0m0.062s 00:04:55.426 02:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.426 02:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:55.426 02:49:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:55.426 02:49:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:55.426 02:49:26 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:55.426 02:49:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.426 02:49:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.426 02:49:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.426 ************************************ 00:04:55.426 START TEST exit_on_failed_rpc_init 00:04:55.426 ************************************ 00:04:55.426 02:49:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:55.426 02:49:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57607 00:04:55.426 02:49:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57607 00:04:55.426 02:49:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:55.426 02:49:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57607 ']' 00:04:55.426 02:49:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.426 02:49:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.426 02:49:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.426 02:49:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.426 02:49:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:55.686 [2024-12-05 02:49:26.366923] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:04:55.686 [2024-12-05 02:49:26.367115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57607 ] 00:04:55.944 [2024-12-05 02:49:26.540894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.944 [2024-12-05 02:49:26.627611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.511 02:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.511 02:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:56.511 02:49:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.511 02:49:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:56.511 02:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:56.511 02:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:56.511 02:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.511 02:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.511 02:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.511 02:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.511 02:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.511 02:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.511 02:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.511 02:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:56.511 02:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:56.511 [2024-12-05 02:49:27.303743] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:04:56.512 [2024-12-05 02:49:27.303857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57625 ] 00:04:56.770 [2024-12-05 02:49:27.463754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.770 [2024-12-05 02:49:27.563663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.770 [2024-12-05 02:49:27.563746] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
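This is the failure exit_on_failed_rpc_init is designed to provoke: the first spdk_tgt (pid 57607) already holds /var/tmp/spdk.sock, so the second instance started on core mask 0x2 cannot bind the same RPC socket and the app stops with a non-zero status. Running two targets side by side normally means giving each instance its own RPC socket; a sketch (the -r/--rpc-socket app option and rpc.py's -s flag are standard SPDK options, socket paths illustrative):

    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk0.sock &
    build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk1.sock &
    # address each instance explicitly
    scripts/rpc.py -s /var/tmp/spdk0.sock spdk_get_version
    scripts/rpc.py -s /var/tmp/spdk1.sock spdk_get_version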
00:04:56.770 [2024-12-05 02:49:27.563759] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:56.770 [2024-12-05 02:49:27.563772] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:57.028 02:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:57.028 02:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:57.028 02:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:57.028 02:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:57.028 02:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:57.028 02:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:57.028 02:49:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:57.028 02:49:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57607 00:04:57.028 02:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57607 ']' 00:04:57.028 02:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57607 00:04:57.028 02:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:57.028 02:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.028 02:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57607 00:04:57.028 killing process with pid 57607 00:04:57.028 02:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.028 02:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.028 02:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57607' 00:04:57.028 02:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57607 00:04:57.028 02:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57607 00:04:58.403 00:04:58.403 real 0m2.771s 00:04:58.403 user 0m3.052s 00:04:58.403 sys 0m0.482s 00:04:58.403 02:49:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.403 ************************************ 00:04:58.403 END TEST exit_on_failed_rpc_init 00:04:58.403 ************************************ 00:04:58.403 02:49:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:58.403 02:49:29 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:58.403 ************************************ 00:04:58.403 END TEST skip_rpc 00:04:58.403 ************************************ 00:04:58.403 00:04:58.403 real 0m18.356s 00:04:58.403 user 0m17.298s 00:04:58.403 sys 0m1.915s 00:04:58.403 02:49:29 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.403 02:49:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.403 02:49:29 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:58.403 02:49:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.404 02:49:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.404 02:49:29 -- common/autotest_common.sh@10 -- # set +x 00:04:58.404 
************************************ 00:04:58.404 START TEST rpc_client 00:04:58.404 ************************************ 00:04:58.404 02:49:29 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:58.404 * Looking for test storage... 00:04:58.404 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:58.404 02:49:29 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:58.404 02:49:29 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:58.404 02:49:29 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:58.404 02:49:29 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:58.404 02:49:29 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.404 02:49:29 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.404 02:49:29 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.404 02:49:29 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.404 02:49:29 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.404 02:49:29 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.404 02:49:29 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.404 02:49:29 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.404 02:49:29 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.404 02:49:29 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.404 02:49:29 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.404 02:49:29 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:58.404 02:49:29 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:58.404 02:49:29 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.404 02:49:29 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.404 02:49:29 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:58.404 02:49:29 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:58.404 02:49:29 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.404 02:49:29 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:58.404 02:49:29 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.404 02:49:29 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:58.404 02:49:29 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:58.404 02:49:29 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.404 02:49:29 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:58.404 02:49:29 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.404 02:49:29 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.404 02:49:29 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.404 02:49:29 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:58.663 02:49:29 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.663 02:49:29 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:58.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.663 --rc genhtml_branch_coverage=1 00:04:58.663 --rc genhtml_function_coverage=1 00:04:58.663 --rc genhtml_legend=1 00:04:58.663 --rc geninfo_all_blocks=1 00:04:58.663 --rc geninfo_unexecuted_blocks=1 00:04:58.663 00:04:58.663 ' 00:04:58.663 02:49:29 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:58.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.663 --rc genhtml_branch_coverage=1 00:04:58.663 --rc genhtml_function_coverage=1 00:04:58.663 --rc genhtml_legend=1 00:04:58.663 --rc geninfo_all_blocks=1 00:04:58.663 --rc geninfo_unexecuted_blocks=1 00:04:58.663 00:04:58.663 ' 00:04:58.663 02:49:29 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:58.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.663 --rc genhtml_branch_coverage=1 00:04:58.663 --rc genhtml_function_coverage=1 00:04:58.663 --rc genhtml_legend=1 00:04:58.663 --rc geninfo_all_blocks=1 00:04:58.663 --rc geninfo_unexecuted_blocks=1 00:04:58.663 00:04:58.663 ' 00:04:58.663 02:49:29 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:58.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.663 --rc genhtml_branch_coverage=1 00:04:58.663 --rc genhtml_function_coverage=1 00:04:58.663 --rc genhtml_legend=1 00:04:58.663 --rc geninfo_all_blocks=1 00:04:58.664 --rc geninfo_unexecuted_blocks=1 00:04:58.664 00:04:58.664 ' 00:04:58.664 02:49:29 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:58.664 OK 00:04:58.664 02:49:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:58.664 00:04:58.664 real 0m0.181s 00:04:58.664 user 0m0.105s 00:04:58.664 sys 0m0.079s 00:04:58.664 02:49:29 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.664 ************************************ 00:04:58.664 END TEST rpc_client 00:04:58.664 ************************************ 00:04:58.664 02:49:29 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:58.664 02:49:29 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:58.664 02:49:29 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.664 02:49:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.664 02:49:29 -- common/autotest_common.sh@10 -- # set +x 00:04:58.664 ************************************ 00:04:58.664 START TEST json_config 00:04:58.664 ************************************ 00:04:58.664 02:49:29 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:58.664 02:49:29 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:58.664 02:49:29 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:58.664 02:49:29 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:58.664 02:49:29 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:58.664 02:49:29 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.664 02:49:29 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.664 02:49:29 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.664 02:49:29 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.664 02:49:29 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.664 02:49:29 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.664 02:49:29 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.664 02:49:29 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.664 02:49:29 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.664 02:49:29 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.664 02:49:29 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.664 02:49:29 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:58.664 02:49:29 json_config -- scripts/common.sh@345 -- # : 1 00:04:58.664 02:49:29 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.664 02:49:29 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.664 02:49:29 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:58.664 02:49:29 json_config -- scripts/common.sh@353 -- # local d=1 00:04:58.664 02:49:29 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.664 02:49:29 json_config -- scripts/common.sh@355 -- # echo 1 00:04:58.664 02:49:29 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.664 02:49:29 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:58.664 02:49:29 json_config -- scripts/common.sh@353 -- # local d=2 00:04:58.664 02:49:29 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.664 02:49:29 json_config -- scripts/common.sh@355 -- # echo 2 00:04:58.664 02:49:29 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.664 02:49:29 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.664 02:49:29 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.664 02:49:29 json_config -- scripts/common.sh@368 -- # return 0 00:04:58.664 02:49:29 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.664 02:49:29 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:58.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.664 --rc genhtml_branch_coverage=1 00:04:58.664 --rc genhtml_function_coverage=1 00:04:58.664 --rc genhtml_legend=1 00:04:58.664 --rc geninfo_all_blocks=1 00:04:58.664 --rc geninfo_unexecuted_blocks=1 00:04:58.664 00:04:58.664 ' 00:04:58.664 02:49:29 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:58.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.664 --rc genhtml_branch_coverage=1 00:04:58.664 --rc genhtml_function_coverage=1 00:04:58.664 --rc genhtml_legend=1 00:04:58.664 --rc geninfo_all_blocks=1 00:04:58.664 --rc geninfo_unexecuted_blocks=1 00:04:58.664 00:04:58.664 ' 00:04:58.664 02:49:29 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:58.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.664 --rc genhtml_branch_coverage=1 00:04:58.664 --rc genhtml_function_coverage=1 00:04:58.664 --rc genhtml_legend=1 00:04:58.664 --rc geninfo_all_blocks=1 00:04:58.664 --rc geninfo_unexecuted_blocks=1 00:04:58.664 00:04:58.664 ' 00:04:58.664 02:49:29 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:58.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.664 --rc genhtml_branch_coverage=1 00:04:58.664 --rc genhtml_function_coverage=1 00:04:58.664 --rc genhtml_legend=1 00:04:58.664 --rc geninfo_all_blocks=1 00:04:58.664 --rc geninfo_unexecuted_blocks=1 00:04:58.664 00:04:58.664 ' 00:04:58.664 02:49:29 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:58.664 02:49:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:58.664 02:49:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:58.664 02:49:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:58.664 02:49:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:58.664 02:49:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:58.664 02:49:29 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:58.664 02:49:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:58.664 02:49:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:58.664 02:49:29 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:58.664 02:49:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:58.664 02:49:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:58.664 02:49:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:45146dea-42da-4764-9336-d85a2ddead66 00:04:58.664 02:49:29 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=45146dea-42da-4764-9336-d85a2ddead66 00:04:58.664 02:49:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:58.664 02:49:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:58.664 02:49:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:58.664 02:49:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:58.664 02:49:29 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:58.664 02:49:29 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:58.664 02:49:29 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:58.664 02:49:29 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:58.664 02:49:29 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:58.664 02:49:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.664 02:49:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.664 02:49:29 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.664 02:49:29 json_config -- paths/export.sh@5 -- # export PATH 00:04:58.664 02:49:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.664 02:49:29 json_config -- nvmf/common.sh@51 -- # : 0 00:04:58.664 02:49:29 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:58.664 02:49:29 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:58.664 02:49:29 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:58.664 02:49:29 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:58.664 02:49:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:58.664 02:49:29 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:58.664 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:58.664 02:49:29 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:58.664 02:49:29 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:58.664 02:49:29 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:58.664 02:49:29 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:58.664 02:49:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:58.664 02:49:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:58.664 02:49:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:58.664 02:49:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:58.664 02:49:29 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:58.664 WARNING: No tests are enabled so not running JSON configuration tests 00:04:58.664 02:49:29 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:58.664 00:04:58.664 real 0m0.144s 00:04:58.664 user 0m0.086s 00:04:58.664 sys 0m0.057s 00:04:58.664 02:49:29 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.665 02:49:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.665 ************************************ 00:04:58.665 END TEST json_config 00:04:58.665 ************************************ 00:04:58.926 02:49:29 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:58.926 02:49:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.926 02:49:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.926 02:49:29 -- common/autotest_common.sh@10 -- # set +x 00:04:58.926 ************************************ 00:04:58.926 START TEST json_config_extra_key 00:04:58.926 ************************************ 00:04:58.926 02:49:29 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:58.926 02:49:29 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:58.926 02:49:29 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:58.926 02:49:29 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:58.926 02:49:29 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:58.926 02:49:29 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.926 02:49:29 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.926 02:49:29 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.926 02:49:29 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.926 02:49:29 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.926 02:49:29 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.926 02:49:29 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.926 02:49:29 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.926 02:49:29 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.926 02:49:29 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.926 02:49:29 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.926 02:49:29 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:58.926 02:49:29 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:58.926 02:49:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.926 02:49:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.926 02:49:29 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:58.926 02:49:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:58.926 02:49:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.926 02:49:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:58.926 02:49:29 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.926 02:49:29 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:58.926 02:49:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:58.926 02:49:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.926 02:49:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:58.926 02:49:29 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.926 02:49:29 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.926 02:49:29 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.926 02:49:29 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:58.926 02:49:29 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.926 02:49:29 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:58.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.926 --rc genhtml_branch_coverage=1 00:04:58.926 --rc genhtml_function_coverage=1 00:04:58.926 --rc genhtml_legend=1 00:04:58.926 --rc geninfo_all_blocks=1 00:04:58.926 --rc geninfo_unexecuted_blocks=1 00:04:58.926 00:04:58.926 ' 00:04:58.926 02:49:29 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:58.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.926 --rc genhtml_branch_coverage=1 00:04:58.926 --rc genhtml_function_coverage=1 00:04:58.926 --rc genhtml_legend=1 00:04:58.926 --rc geninfo_all_blocks=1 00:04:58.926 --rc geninfo_unexecuted_blocks=1 00:04:58.926 00:04:58.926 ' 00:04:58.926 02:49:29 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:58.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.926 --rc genhtml_branch_coverage=1 00:04:58.926 --rc genhtml_function_coverage=1 00:04:58.926 --rc genhtml_legend=1 00:04:58.926 --rc geninfo_all_blocks=1 00:04:58.926 --rc geninfo_unexecuted_blocks=1 00:04:58.926 00:04:58.926 ' 00:04:58.926 02:49:29 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:58.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.926 --rc genhtml_branch_coverage=1 00:04:58.926 --rc 
genhtml_function_coverage=1 00:04:58.926 --rc genhtml_legend=1 00:04:58.926 --rc geninfo_all_blocks=1 00:04:58.926 --rc geninfo_unexecuted_blocks=1 00:04:58.926 00:04:58.926 ' 00:04:58.926 02:49:29 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:58.926 02:49:29 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:58.926 02:49:29 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:58.926 02:49:29 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:58.926 02:49:29 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:58.926 02:49:29 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:58.926 02:49:29 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:58.926 02:49:29 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:58.926 02:49:29 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:58.926 02:49:29 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:58.926 02:49:29 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:58.926 02:49:29 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:58.926 02:49:29 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:45146dea-42da-4764-9336-d85a2ddead66 00:04:58.926 02:49:29 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=45146dea-42da-4764-9336-d85a2ddead66 00:04:58.926 02:49:29 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:58.926 02:49:29 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:58.926 02:49:29 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:58.926 02:49:29 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:58.926 02:49:29 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:58.926 02:49:29 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:58.926 02:49:29 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:58.926 02:49:29 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:58.926 02:49:29 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:58.926 02:49:29 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.926 02:49:29 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.927 02:49:29 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.927 02:49:29 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:58.927 02:49:29 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.927 02:49:29 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:58.927 02:49:29 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:58.927 02:49:29 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:58.927 02:49:29 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:58.927 02:49:29 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:58.927 02:49:29 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:58.927 02:49:29 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:58.927 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:58.927 02:49:29 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:58.927 02:49:29 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:58.927 02:49:29 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:58.927 02:49:29 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:58.927 02:49:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:58.927 02:49:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:58.927 02:49:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:58.927 02:49:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:58.927 02:49:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:58.927 02:49:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:58.927 02:49:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:58.927 02:49:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:58.927 02:49:29 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:58.927 02:49:29 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:58.927 INFO: launching applications... 
00:04:58.927 02:49:29 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:58.927 02:49:29 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:58.927 02:49:29 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:58.927 02:49:29 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:58.927 02:49:29 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:58.927 02:49:29 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:58.927 02:49:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:58.927 02:49:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:58.927 02:49:29 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57813 00:04:58.927 02:49:29 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:58.927 Waiting for target to run... 00:04:58.927 02:49:29 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57813 /var/tmp/spdk_tgt.sock 00:04:58.927 02:49:29 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57813 ']' 00:04:58.927 02:49:29 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:58.927 02:49:29 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.927 02:49:29 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:58.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:58.927 02:49:29 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.927 02:49:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:58.927 02:49:29 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:58.927 [2024-12-05 02:49:29.728503] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:04:58.927 [2024-12-05 02:49:29.728747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57813 ] 00:04:59.494 [2024-12-05 02:49:30.035985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.494 [2024-12-05 02:49:30.130039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.752 02:49:30 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.752 00:04:59.752 INFO: shutting down applications... 00:04:59.752 02:49:30 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:59.752 02:49:30 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:59.752 02:49:30 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:59.752 02:49:30 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:59.752 02:49:30 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:59.752 02:49:30 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:59.752 02:49:30 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57813 ]] 00:04:59.752 02:49:30 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57813 00:04:59.752 02:49:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:59.752 02:49:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:59.752 02:49:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57813 00:04:59.752 02:49:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:00.347 02:49:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:00.347 02:49:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.347 02:49:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57813 00:05:00.347 02:49:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:00.937 02:49:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:00.937 02:49:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.938 02:49:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57813 00:05:00.938 02:49:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:01.507 02:49:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:01.507 02:49:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:01.507 02:49:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57813 00:05:01.507 02:49:32 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:01.507 02:49:32 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:01.507 02:49:32 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:01.507 SPDK target shutdown done 00:05:01.507 02:49:32 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:01.507 Success 00:05:01.507 02:49:32 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:01.507 ************************************ 00:05:01.507 END TEST json_config_extra_key 00:05:01.507 ************************************ 00:05:01.507 00:05:01.507 real 0m2.552s 00:05:01.507 user 0m2.361s 00:05:01.507 sys 0m0.393s 00:05:01.507 02:49:32 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.507 02:49:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:01.507 02:49:32 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:01.507 02:49:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.507 02:49:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.507 02:49:32 -- common/autotest_common.sh@10 -- # set +x 00:05:01.507 ************************************ 00:05:01.507 START TEST alias_rpc 00:05:01.507 ************************************ 00:05:01.507 02:49:32 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:01.507 * Looking for test storage... 
00:05:01.507 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:01.507 02:49:32 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:01.507 02:49:32 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:01.507 02:49:32 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:01.507 02:49:32 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:01.507 02:49:32 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.507 02:49:32 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.507 02:49:32 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.507 02:49:32 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.507 02:49:32 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.507 02:49:32 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.507 02:49:32 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.507 02:49:32 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.507 02:49:32 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.507 02:49:32 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.507 02:49:32 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.507 02:49:32 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:01.507 02:49:32 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:01.507 02:49:32 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.507 02:49:32 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:01.507 02:49:32 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:01.507 02:49:32 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:01.507 02:49:32 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.507 02:49:32 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:01.507 02:49:32 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.507 02:49:32 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:01.507 02:49:32 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:01.507 02:49:32 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.507 02:49:32 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:01.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:01.507 02:49:32 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.507 02:49:32 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.507 02:49:32 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.507 02:49:32 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:01.507 02:49:32 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.507 02:49:32 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:01.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.507 --rc genhtml_branch_coverage=1 00:05:01.507 --rc genhtml_function_coverage=1 00:05:01.507 --rc genhtml_legend=1 00:05:01.507 --rc geninfo_all_blocks=1 00:05:01.507 --rc geninfo_unexecuted_blocks=1 00:05:01.507 00:05:01.507 ' 00:05:01.507 02:49:32 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:01.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.507 --rc genhtml_branch_coverage=1 00:05:01.507 --rc genhtml_function_coverage=1 00:05:01.507 --rc genhtml_legend=1 00:05:01.507 --rc geninfo_all_blocks=1 00:05:01.507 --rc geninfo_unexecuted_blocks=1 00:05:01.507 00:05:01.507 ' 00:05:01.507 02:49:32 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:01.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.507 --rc genhtml_branch_coverage=1 00:05:01.507 --rc genhtml_function_coverage=1 00:05:01.507 --rc genhtml_legend=1 00:05:01.507 --rc geninfo_all_blocks=1 00:05:01.507 --rc geninfo_unexecuted_blocks=1 00:05:01.507 00:05:01.507 ' 00:05:01.507 02:49:32 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:01.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.507 --rc genhtml_branch_coverage=1 00:05:01.507 --rc genhtml_function_coverage=1 00:05:01.507 --rc genhtml_legend=1 00:05:01.507 --rc geninfo_all_blocks=1 00:05:01.507 --rc geninfo_unexecuted_blocks=1 00:05:01.507 00:05:01.507 ' 00:05:01.507 02:49:32 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:01.508 02:49:32 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57901 00:05:01.508 02:49:32 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57901 00:05:01.508 02:49:32 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57901 ']' 00:05:01.508 02:49:32 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.508 02:49:32 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.508 02:49:32 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.508 02:49:32 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.508 02:49:32 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.508 02:49:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.508 [2024-12-05 02:49:32.331036] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:05:01.508 [2024-12-05 02:49:32.331171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57901 ] 00:05:01.766 [2024-12-05 02:49:32.483752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.766 [2024-12-05 02:49:32.586864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.336 02:49:33 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.336 02:49:33 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:02.336 02:49:33 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:02.594 02:49:33 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57901 00:05:02.594 02:49:33 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57901 ']' 00:05:02.594 02:49:33 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57901 00:05:02.594 02:49:33 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:02.594 02:49:33 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.594 02:49:33 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57901 00:05:02.594 killing process with pid 57901 00:05:02.594 02:49:33 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.594 02:49:33 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.594 02:49:33 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57901' 00:05:02.595 02:49:33 alias_rpc -- common/autotest_common.sh@973 -- # kill 57901 00:05:02.595 02:49:33 alias_rpc -- common/autotest_common.sh@978 -- # wait 57901 00:05:03.971 ************************************ 00:05:03.971 END TEST alias_rpc 00:05:03.971 ************************************ 00:05:03.971 00:05:03.971 real 0m2.434s 00:05:03.971 user 0m2.473s 00:05:03.971 sys 0m0.422s 00:05:03.971 02:49:34 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.971 02:49:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.971 02:49:34 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:03.971 02:49:34 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:03.971 02:49:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.971 02:49:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.971 02:49:34 -- common/autotest_common.sh@10 -- # set +x 00:05:03.971 ************************************ 00:05:03.971 START TEST spdkcli_tcp 00:05:03.971 ************************************ 00:05:03.971 02:49:34 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:03.971 * Looking for test storage... 
00:05:03.971 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:03.971 02:49:34 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:03.971 02:49:34 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:03.971 02:49:34 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:03.971 02:49:34 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:03.971 02:49:34 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.971 02:49:34 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.971 02:49:34 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.971 02:49:34 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.971 02:49:34 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.971 02:49:34 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.971 02:49:34 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.971 02:49:34 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.971 02:49:34 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.971 02:49:34 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.971 02:49:34 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.971 02:49:34 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:03.971 02:49:34 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:03.971 02:49:34 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.971 02:49:34 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:03.971 02:49:34 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:03.971 02:49:34 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:03.971 02:49:34 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.971 02:49:34 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:03.971 02:49:34 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.971 02:49:34 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:03.971 02:49:34 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:03.971 02:49:34 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.971 02:49:34 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:03.971 02:49:34 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.971 02:49:34 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.971 02:49:34 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.971 02:49:34 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:03.971 02:49:34 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.971 02:49:34 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:03.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.971 --rc genhtml_branch_coverage=1 00:05:03.971 --rc genhtml_function_coverage=1 00:05:03.971 --rc genhtml_legend=1 00:05:03.971 --rc geninfo_all_blocks=1 00:05:03.971 --rc geninfo_unexecuted_blocks=1 00:05:03.971 00:05:03.971 ' 00:05:03.971 02:49:34 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:03.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.971 --rc genhtml_branch_coverage=1 00:05:03.971 --rc genhtml_function_coverage=1 00:05:03.971 --rc genhtml_legend=1 00:05:03.971 --rc geninfo_all_blocks=1 00:05:03.971 --rc geninfo_unexecuted_blocks=1 00:05:03.971 
00:05:03.971 ' 00:05:03.971 02:49:34 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:03.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.971 --rc genhtml_branch_coverage=1 00:05:03.971 --rc genhtml_function_coverage=1 00:05:03.971 --rc genhtml_legend=1 00:05:03.971 --rc geninfo_all_blocks=1 00:05:03.971 --rc geninfo_unexecuted_blocks=1 00:05:03.971 00:05:03.971 ' 00:05:03.971 02:49:34 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:03.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.971 --rc genhtml_branch_coverage=1 00:05:03.971 --rc genhtml_function_coverage=1 00:05:03.971 --rc genhtml_legend=1 00:05:03.971 --rc geninfo_all_blocks=1 00:05:03.971 --rc geninfo_unexecuted_blocks=1 00:05:03.971 00:05:03.971 ' 00:05:03.971 02:49:34 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:03.971 02:49:34 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:03.971 02:49:34 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:03.971 02:49:34 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:03.971 02:49:34 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:03.971 02:49:34 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:03.971 02:49:34 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:03.971 02:49:34 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:03.971 02:49:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:03.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.971 02:49:34 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57990 00:05:03.971 02:49:34 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57990 00:05:03.971 02:49:34 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:03.971 02:49:34 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57990 ']' 00:05:03.971 02:49:34 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.971 02:49:34 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.972 02:49:34 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.972 02:49:34 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.972 02:49:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:04.232 [2024-12-05 02:49:34.832128] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:05:04.232 [2024-12-05 02:49:34.832238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57990 ] 00:05:04.232 [2024-12-05 02:49:34.989223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:04.232 [2024-12-05 02:49:35.070438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.232 [2024-12-05 02:49:35.070582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.165 02:49:35 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.166 02:49:35 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:05.166 02:49:35 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58007 00:05:05.166 02:49:35 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:05.166 02:49:35 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:05.166 [ 00:05:05.166 "bdev_malloc_delete", 00:05:05.166 "bdev_malloc_create", 00:05:05.166 "bdev_null_resize", 00:05:05.166 "bdev_null_delete", 00:05:05.166 "bdev_null_create", 00:05:05.166 "bdev_nvme_cuse_unregister", 00:05:05.166 "bdev_nvme_cuse_register", 00:05:05.166 "bdev_opal_new_user", 00:05:05.166 "bdev_opal_set_lock_state", 00:05:05.166 "bdev_opal_delete", 00:05:05.166 "bdev_opal_get_info", 00:05:05.166 "bdev_opal_create", 00:05:05.166 "bdev_nvme_opal_revert", 00:05:05.166 "bdev_nvme_opal_init", 00:05:05.166 "bdev_nvme_send_cmd", 00:05:05.166 "bdev_nvme_set_keys", 00:05:05.166 "bdev_nvme_get_path_iostat", 00:05:05.166 "bdev_nvme_get_mdns_discovery_info", 00:05:05.166 "bdev_nvme_stop_mdns_discovery", 00:05:05.166 "bdev_nvme_start_mdns_discovery", 00:05:05.166 "bdev_nvme_set_multipath_policy", 00:05:05.166 "bdev_nvme_set_preferred_path", 00:05:05.166 "bdev_nvme_get_io_paths", 00:05:05.166 "bdev_nvme_remove_error_injection", 00:05:05.166 "bdev_nvme_add_error_injection", 00:05:05.166 "bdev_nvme_get_discovery_info", 00:05:05.166 "bdev_nvme_stop_discovery", 00:05:05.166 "bdev_nvme_start_discovery", 00:05:05.166 "bdev_nvme_get_controller_health_info", 00:05:05.166 "bdev_nvme_disable_controller", 00:05:05.166 "bdev_nvme_enable_controller", 00:05:05.166 "bdev_nvme_reset_controller", 00:05:05.166 "bdev_nvme_get_transport_statistics", 00:05:05.166 "bdev_nvme_apply_firmware", 00:05:05.166 "bdev_nvme_detach_controller", 00:05:05.166 "bdev_nvme_get_controllers", 00:05:05.166 "bdev_nvme_attach_controller", 00:05:05.166 "bdev_nvme_set_hotplug", 00:05:05.166 "bdev_nvme_set_options", 00:05:05.166 "bdev_passthru_delete", 00:05:05.166 "bdev_passthru_create", 00:05:05.166 "bdev_lvol_set_parent_bdev", 00:05:05.166 "bdev_lvol_set_parent", 00:05:05.166 "bdev_lvol_check_shallow_copy", 00:05:05.166 "bdev_lvol_start_shallow_copy", 00:05:05.166 "bdev_lvol_grow_lvstore", 00:05:05.166 "bdev_lvol_get_lvols", 00:05:05.166 "bdev_lvol_get_lvstores", 00:05:05.166 "bdev_lvol_delete", 00:05:05.166 "bdev_lvol_set_read_only", 00:05:05.166 "bdev_lvol_resize", 00:05:05.166 "bdev_lvol_decouple_parent", 00:05:05.166 "bdev_lvol_inflate", 00:05:05.166 "bdev_lvol_rename", 00:05:05.166 "bdev_lvol_clone_bdev", 00:05:05.166 "bdev_lvol_clone", 00:05:05.166 "bdev_lvol_snapshot", 00:05:05.166 "bdev_lvol_create", 00:05:05.166 "bdev_lvol_delete_lvstore", 00:05:05.166 "bdev_lvol_rename_lvstore", 00:05:05.166 
"bdev_lvol_create_lvstore", 00:05:05.166 "bdev_raid_set_options", 00:05:05.166 "bdev_raid_remove_base_bdev", 00:05:05.166 "bdev_raid_add_base_bdev", 00:05:05.166 "bdev_raid_delete", 00:05:05.166 "bdev_raid_create", 00:05:05.166 "bdev_raid_get_bdevs", 00:05:05.166 "bdev_error_inject_error", 00:05:05.166 "bdev_error_delete", 00:05:05.166 "bdev_error_create", 00:05:05.166 "bdev_split_delete", 00:05:05.166 "bdev_split_create", 00:05:05.166 "bdev_delay_delete", 00:05:05.166 "bdev_delay_create", 00:05:05.166 "bdev_delay_update_latency", 00:05:05.166 "bdev_zone_block_delete", 00:05:05.166 "bdev_zone_block_create", 00:05:05.166 "blobfs_create", 00:05:05.166 "blobfs_detect", 00:05:05.166 "blobfs_set_cache_size", 00:05:05.166 "bdev_xnvme_delete", 00:05:05.166 "bdev_xnvme_create", 00:05:05.166 "bdev_aio_delete", 00:05:05.166 "bdev_aio_rescan", 00:05:05.166 "bdev_aio_create", 00:05:05.166 "bdev_ftl_set_property", 00:05:05.166 "bdev_ftl_get_properties", 00:05:05.166 "bdev_ftl_get_stats", 00:05:05.166 "bdev_ftl_unmap", 00:05:05.166 "bdev_ftl_unload", 00:05:05.166 "bdev_ftl_delete", 00:05:05.166 "bdev_ftl_load", 00:05:05.166 "bdev_ftl_create", 00:05:05.166 "bdev_virtio_attach_controller", 00:05:05.166 "bdev_virtio_scsi_get_devices", 00:05:05.166 "bdev_virtio_detach_controller", 00:05:05.166 "bdev_virtio_blk_set_hotplug", 00:05:05.166 "bdev_iscsi_delete", 00:05:05.166 "bdev_iscsi_create", 00:05:05.166 "bdev_iscsi_set_options", 00:05:05.166 "accel_error_inject_error", 00:05:05.166 "ioat_scan_accel_module", 00:05:05.166 "dsa_scan_accel_module", 00:05:05.166 "iaa_scan_accel_module", 00:05:05.166 "keyring_file_remove_key", 00:05:05.166 "keyring_file_add_key", 00:05:05.166 "keyring_linux_set_options", 00:05:05.166 "fsdev_aio_delete", 00:05:05.166 "fsdev_aio_create", 00:05:05.166 "iscsi_get_histogram", 00:05:05.166 "iscsi_enable_histogram", 00:05:05.166 "iscsi_set_options", 00:05:05.166 "iscsi_get_auth_groups", 00:05:05.166 "iscsi_auth_group_remove_secret", 00:05:05.166 "iscsi_auth_group_add_secret", 00:05:05.166 "iscsi_delete_auth_group", 00:05:05.166 "iscsi_create_auth_group", 00:05:05.166 "iscsi_set_discovery_auth", 00:05:05.166 "iscsi_get_options", 00:05:05.166 "iscsi_target_node_request_logout", 00:05:05.166 "iscsi_target_node_set_redirect", 00:05:05.166 "iscsi_target_node_set_auth", 00:05:05.166 "iscsi_target_node_add_lun", 00:05:05.166 "iscsi_get_stats", 00:05:05.166 "iscsi_get_connections", 00:05:05.166 "iscsi_portal_group_set_auth", 00:05:05.166 "iscsi_start_portal_group", 00:05:05.166 "iscsi_delete_portal_group", 00:05:05.166 "iscsi_create_portal_group", 00:05:05.166 "iscsi_get_portal_groups", 00:05:05.166 "iscsi_delete_target_node", 00:05:05.166 "iscsi_target_node_remove_pg_ig_maps", 00:05:05.166 "iscsi_target_node_add_pg_ig_maps", 00:05:05.166 "iscsi_create_target_node", 00:05:05.166 "iscsi_get_target_nodes", 00:05:05.166 "iscsi_delete_initiator_group", 00:05:05.166 "iscsi_initiator_group_remove_initiators", 00:05:05.166 "iscsi_initiator_group_add_initiators", 00:05:05.166 "iscsi_create_initiator_group", 00:05:05.166 "iscsi_get_initiator_groups", 00:05:05.166 "nvmf_set_crdt", 00:05:05.166 "nvmf_set_config", 00:05:05.166 "nvmf_set_max_subsystems", 00:05:05.166 "nvmf_stop_mdns_prr", 00:05:05.166 "nvmf_publish_mdns_prr", 00:05:05.166 "nvmf_subsystem_get_listeners", 00:05:05.166 "nvmf_subsystem_get_qpairs", 00:05:05.166 "nvmf_subsystem_get_controllers", 00:05:05.166 "nvmf_get_stats", 00:05:05.166 "nvmf_get_transports", 00:05:05.166 "nvmf_create_transport", 00:05:05.166 "nvmf_get_targets", 00:05:05.166 
"nvmf_delete_target", 00:05:05.166 "nvmf_create_target", 00:05:05.166 "nvmf_subsystem_allow_any_host", 00:05:05.166 "nvmf_subsystem_set_keys", 00:05:05.166 "nvmf_subsystem_remove_host", 00:05:05.166 "nvmf_subsystem_add_host", 00:05:05.166 "nvmf_ns_remove_host", 00:05:05.166 "nvmf_ns_add_host", 00:05:05.166 "nvmf_subsystem_remove_ns", 00:05:05.166 "nvmf_subsystem_set_ns_ana_group", 00:05:05.166 "nvmf_subsystem_add_ns", 00:05:05.166 "nvmf_subsystem_listener_set_ana_state", 00:05:05.166 "nvmf_discovery_get_referrals", 00:05:05.166 "nvmf_discovery_remove_referral", 00:05:05.166 "nvmf_discovery_add_referral", 00:05:05.166 "nvmf_subsystem_remove_listener", 00:05:05.166 "nvmf_subsystem_add_listener", 00:05:05.166 "nvmf_delete_subsystem", 00:05:05.166 "nvmf_create_subsystem", 00:05:05.166 "nvmf_get_subsystems", 00:05:05.166 "env_dpdk_get_mem_stats", 00:05:05.166 "nbd_get_disks", 00:05:05.166 "nbd_stop_disk", 00:05:05.166 "nbd_start_disk", 00:05:05.166 "ublk_recover_disk", 00:05:05.166 "ublk_get_disks", 00:05:05.166 "ublk_stop_disk", 00:05:05.166 "ublk_start_disk", 00:05:05.166 "ublk_destroy_target", 00:05:05.166 "ublk_create_target", 00:05:05.166 "virtio_blk_create_transport", 00:05:05.166 "virtio_blk_get_transports", 00:05:05.166 "vhost_controller_set_coalescing", 00:05:05.166 "vhost_get_controllers", 00:05:05.166 "vhost_delete_controller", 00:05:05.166 "vhost_create_blk_controller", 00:05:05.166 "vhost_scsi_controller_remove_target", 00:05:05.166 "vhost_scsi_controller_add_target", 00:05:05.166 "vhost_start_scsi_controller", 00:05:05.166 "vhost_create_scsi_controller", 00:05:05.166 "thread_set_cpumask", 00:05:05.166 "scheduler_set_options", 00:05:05.166 "framework_get_governor", 00:05:05.166 "framework_get_scheduler", 00:05:05.166 "framework_set_scheduler", 00:05:05.166 "framework_get_reactors", 00:05:05.166 "thread_get_io_channels", 00:05:05.166 "thread_get_pollers", 00:05:05.166 "thread_get_stats", 00:05:05.166 "framework_monitor_context_switch", 00:05:05.166 "spdk_kill_instance", 00:05:05.166 "log_enable_timestamps", 00:05:05.166 "log_get_flags", 00:05:05.166 "log_clear_flag", 00:05:05.166 "log_set_flag", 00:05:05.166 "log_get_level", 00:05:05.166 "log_set_level", 00:05:05.166 "log_get_print_level", 00:05:05.166 "log_set_print_level", 00:05:05.166 "framework_enable_cpumask_locks", 00:05:05.166 "framework_disable_cpumask_locks", 00:05:05.166 "framework_wait_init", 00:05:05.166 "framework_start_init", 00:05:05.166 "scsi_get_devices", 00:05:05.166 "bdev_get_histogram", 00:05:05.166 "bdev_enable_histogram", 00:05:05.166 "bdev_set_qos_limit", 00:05:05.166 "bdev_set_qd_sampling_period", 00:05:05.166 "bdev_get_bdevs", 00:05:05.166 "bdev_reset_iostat", 00:05:05.166 "bdev_get_iostat", 00:05:05.166 "bdev_examine", 00:05:05.166 "bdev_wait_for_examine", 00:05:05.166 "bdev_set_options", 00:05:05.166 "accel_get_stats", 00:05:05.166 "accel_set_options", 00:05:05.167 "accel_set_driver", 00:05:05.167 "accel_crypto_key_destroy", 00:05:05.167 "accel_crypto_keys_get", 00:05:05.167 "accel_crypto_key_create", 00:05:05.167 "accel_assign_opc", 00:05:05.167 "accel_get_module_info", 00:05:05.167 "accel_get_opc_assignments", 00:05:05.167 "vmd_rescan", 00:05:05.167 "vmd_remove_device", 00:05:05.167 "vmd_enable", 00:05:05.167 "sock_get_default_impl", 00:05:05.167 "sock_set_default_impl", 00:05:05.167 "sock_impl_set_options", 00:05:05.167 "sock_impl_get_options", 00:05:05.167 "iobuf_get_stats", 00:05:05.167 "iobuf_set_options", 00:05:05.167 "keyring_get_keys", 00:05:05.167 "framework_get_pci_devices", 00:05:05.167 
"framework_get_config", 00:05:05.167 "framework_get_subsystems", 00:05:05.167 "fsdev_set_opts", 00:05:05.167 "fsdev_get_opts", 00:05:05.167 "trace_get_info", 00:05:05.167 "trace_get_tpoint_group_mask", 00:05:05.167 "trace_disable_tpoint_group", 00:05:05.167 "trace_enable_tpoint_group", 00:05:05.167 "trace_clear_tpoint_mask", 00:05:05.167 "trace_set_tpoint_mask", 00:05:05.167 "notify_get_notifications", 00:05:05.167 "notify_get_types", 00:05:05.167 "spdk_get_version", 00:05:05.167 "rpc_get_methods" 00:05:05.167 ] 00:05:05.167 02:49:35 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:05.167 02:49:35 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:05.167 02:49:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:05.167 02:49:35 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:05.167 02:49:35 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57990 00:05:05.167 02:49:35 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57990 ']' 00:05:05.167 02:49:35 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57990 00:05:05.167 02:49:35 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:05.167 02:49:35 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.167 02:49:35 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57990 00:05:05.167 killing process with pid 57990 00:05:05.167 02:49:35 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.167 02:49:35 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.167 02:49:35 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57990' 00:05:05.167 02:49:35 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57990 00:05:05.167 02:49:35 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57990 00:05:06.544 ************************************ 00:05:06.544 END TEST spdkcli_tcp 00:05:06.544 ************************************ 00:05:06.544 00:05:06.544 real 0m2.498s 00:05:06.544 user 0m4.471s 00:05:06.544 sys 0m0.429s 00:05:06.544 02:49:37 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.544 02:49:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:06.544 02:49:37 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:06.544 02:49:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.544 02:49:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.544 02:49:37 -- common/autotest_common.sh@10 -- # set +x 00:05:06.544 ************************************ 00:05:06.544 START TEST dpdk_mem_utility 00:05:06.544 ************************************ 00:05:06.544 02:49:37 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:06.544 * Looking for test storage... 
00:05:06.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:06.544 02:49:37 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:06.544 02:49:37 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:06.544 02:49:37 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:06.544 02:49:37 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:06.544 02:49:37 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.544 02:49:37 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.544 02:49:37 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.544 02:49:37 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.544 02:49:37 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.544 02:49:37 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.544 02:49:37 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.544 02:49:37 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.544 02:49:37 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.544 02:49:37 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.544 02:49:37 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.544 02:49:37 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:06.544 02:49:37 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:06.544 02:49:37 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.544 02:49:37 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:06.544 02:49:37 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:06.544 02:49:37 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:06.544 02:49:37 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.544 02:49:37 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:06.544 02:49:37 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.544 02:49:37 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:06.544 02:49:37 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:06.544 02:49:37 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.544 02:49:37 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:06.544 02:49:37 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.544 02:49:37 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.544 02:49:37 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.544 02:49:37 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:06.544 02:49:37 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.544 02:49:37 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:06.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.544 --rc genhtml_branch_coverage=1 00:05:06.544 --rc genhtml_function_coverage=1 00:05:06.544 --rc genhtml_legend=1 00:05:06.544 --rc geninfo_all_blocks=1 00:05:06.544 --rc geninfo_unexecuted_blocks=1 00:05:06.544 00:05:06.544 ' 00:05:06.544 02:49:37 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:06.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.544 --rc 
genhtml_branch_coverage=1 00:05:06.544 --rc genhtml_function_coverage=1 00:05:06.544 --rc genhtml_legend=1 00:05:06.544 --rc geninfo_all_blocks=1 00:05:06.544 --rc geninfo_unexecuted_blocks=1 00:05:06.544 00:05:06.544 ' 00:05:06.544 02:49:37 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:06.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.544 --rc genhtml_branch_coverage=1 00:05:06.544 --rc genhtml_function_coverage=1 00:05:06.544 --rc genhtml_legend=1 00:05:06.544 --rc geninfo_all_blocks=1 00:05:06.544 --rc geninfo_unexecuted_blocks=1 00:05:06.544 00:05:06.544 ' 00:05:06.544 02:49:37 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:06.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.544 --rc genhtml_branch_coverage=1 00:05:06.544 --rc genhtml_function_coverage=1 00:05:06.544 --rc genhtml_legend=1 00:05:06.544 --rc geninfo_all_blocks=1 00:05:06.544 --rc geninfo_unexecuted_blocks=1 00:05:06.544 00:05:06.544 ' 00:05:06.544 02:49:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:06.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.544 02:49:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58096 00:05:06.544 02:49:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.544 02:49:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58096 00:05:06.544 02:49:37 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58096 ']' 00:05:06.544 02:49:37 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.544 02:49:37 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.544 02:49:37 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.544 02:49:37 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.544 02:49:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:06.803 [2024-12-05 02:49:37.390814] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:05:06.803 [2024-12-05 02:49:37.391113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58096 ] 00:05:06.803 [2024-12-05 02:49:37.542925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.803 [2024-12-05 02:49:37.620620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.369 02:49:38 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.369 02:49:38 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:07.370 02:49:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:07.370 02:49:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:07.370 02:49:38 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.370 02:49:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:07.370 { 00:05:07.370 "filename": "/tmp/spdk_mem_dump.txt" 00:05:07.370 } 00:05:07.370 02:49:38 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.370 02:49:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:07.631 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:07.631 1 heaps totaling size 824.000000 MiB 00:05:07.631 size: 824.000000 MiB heap id: 0 00:05:07.631 end heaps---------- 00:05:07.631 9 mempools totaling size 603.782043 MiB 00:05:07.631 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:07.631 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:07.631 size: 100.555481 MiB name: bdev_io_58096 00:05:07.631 size: 50.003479 MiB name: msgpool_58096 00:05:07.631 size: 36.509338 MiB name: fsdev_io_58096 00:05:07.631 size: 21.763794 MiB name: PDU_Pool 00:05:07.631 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:07.631 size: 4.133484 MiB name: evtpool_58096 00:05:07.631 size: 0.026123 MiB name: Session_Pool 00:05:07.631 end mempools------- 00:05:07.631 6 memzones totaling size 4.142822 MiB 00:05:07.631 size: 1.000366 MiB name: RG_ring_0_58096 00:05:07.631 size: 1.000366 MiB name: RG_ring_1_58096 00:05:07.631 size: 1.000366 MiB name: RG_ring_4_58096 00:05:07.631 size: 1.000366 MiB name: RG_ring_5_58096 00:05:07.631 size: 0.125366 MiB name: RG_ring_2_58096 00:05:07.631 size: 0.015991 MiB name: RG_ring_3_58096 00:05:07.631 end memzones------- 00:05:07.631 02:49:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:07.631 heap id: 0 total size: 824.000000 MiB number of busy elements: 315 number of free elements: 18 00:05:07.631 list of free elements. 
size: 16.781372 MiB 00:05:07.631 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:07.631 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:07.631 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:07.631 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:07.631 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:07.631 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:07.631 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:07.631 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:07.631 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:07.631 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:07.631 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:07.631 element at address: 0x20001b400000 with size: 0.560242 MiB 00:05:07.631 element at address: 0x200000c00000 with size: 0.490173 MiB 00:05:07.631 element at address: 0x200019600000 with size: 0.488464 MiB 00:05:07.631 element at address: 0x200019e00000 with size: 0.485413 MiB 00:05:07.631 element at address: 0x200012c00000 with size: 0.433228 MiB 00:05:07.631 element at address: 0x200028800000 with size: 0.391663 MiB 00:05:07.631 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:07.631 list of standard malloc elements. size: 199.287720 MiB 00:05:07.631 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:07.631 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:07.631 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:07.631 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:07.631 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:07.631 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:07.631 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:07.631 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:07.631 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:07.631 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:05:07.631 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:07.631 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:07.631 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:07.631 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:07.631 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:07.631 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:07.632 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:07.632 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:07.632 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:07.632 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:07.632 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:07.632 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:07.632 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:07.632 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:07.632 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:07.632 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:07.632 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:07.632 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:07.632 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:07.632 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:05:07.632 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:07.632 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:07.632 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:07.632 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:07.632 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:07.632 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:07.632 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:07.632 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:07.632 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:07.632 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:07.632 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:07.632 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:07.632 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001967d0c0 
with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200019affc40 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001b48f6c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001b48f7c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:05:07.632 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4918c0 with size: 0.000244 MiB 
00:05:07.633 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:05:07.633 element at 
address: 0x20001b494ac0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:05:07.633 element at address: 0x200028864440 with size: 0.000244 MiB 00:05:07.633 element at address: 0x200028864540 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886b200 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886b480 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886b580 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886b680 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886b780 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886b880 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886b980 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886be80 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886c080 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886c180 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886c280 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886c380 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886c480 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886c580 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886c680 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886c780 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886c880 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886c980 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886d080 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886d180 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886d280 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886d380 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886d480 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886d580 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886d680 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886d780 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886d880 
with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886d980 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886da80 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886db80 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886de80 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886df80 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886e080 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886e180 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886e280 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886e380 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886e480 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886e580 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886e680 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886e780 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886e880 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886e980 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886f080 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886f180 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886f280 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886f380 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886f480 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886f580 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886f680 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886f780 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886f880 with size: 0.000244 MiB 00:05:07.633 element at address: 0x20002886f980 with size: 0.000244 MiB 00:05:07.634 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:05:07.634 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:05:07.634 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:05:07.634 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:05:07.634 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:05:07.634 list of memzone associated elements. 
size: 607.930908 MiB 00:05:07.634 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:07.634 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:07.634 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:07.634 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:07.634 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:07.634 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58096_0 00:05:07.634 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:07.634 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58096_0 00:05:07.634 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:07.634 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58096_0 00:05:07.634 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:07.634 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:07.634 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:07.634 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:07.634 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:07.634 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58096_0 00:05:07.634 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:07.634 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58096 00:05:07.634 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:07.634 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58096 00:05:07.634 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:07.634 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:07.634 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:07.634 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:07.634 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:07.634 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:07.634 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:07.634 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:07.634 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:07.634 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58096 00:05:07.634 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:07.634 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58096 00:05:07.634 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:07.634 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58096 00:05:07.634 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:07.634 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58096 00:05:07.634 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:07.634 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58096 00:05:07.634 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:07.634 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58096 00:05:07.634 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:05:07.634 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:07.634 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:05:07.634 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:07.634 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:05:07.634 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:07.634 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:07.634 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58096 00:05:07.634 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:07.634 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58096 00:05:07.634 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:05:07.634 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:07.634 element at address: 0x200028864640 with size: 0.023804 MiB 00:05:07.634 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:07.634 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:07.634 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58096 00:05:07.634 element at address: 0x20002886a7c0 with size: 0.002502 MiB 00:05:07.634 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:07.634 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:07.634 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58096 00:05:07.634 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:07.634 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58096 00:05:07.634 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:07.634 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58096 00:05:07.634 element at address: 0x20002886b300 with size: 0.000366 MiB 00:05:07.634 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:07.634 02:49:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:07.634 02:49:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58096 00:05:07.634 02:49:38 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58096 ']' 00:05:07.634 02:49:38 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58096 00:05:07.634 02:49:38 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:07.634 02:49:38 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.634 02:49:38 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58096 00:05:07.634 02:49:38 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.634 02:49:38 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.634 02:49:38 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58096' 00:05:07.634 killing process with pid 58096 00:05:07.634 02:49:38 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58096 00:05:07.634 02:49:38 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58096 00:05:09.010 00:05:09.010 real 0m2.326s 00:05:09.010 user 0m2.306s 00:05:09.010 sys 0m0.383s 00:05:09.010 ************************************ 00:05:09.010 END TEST dpdk_mem_utility 00:05:09.010 ************************************ 00:05:09.010 02:49:39 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.010 02:49:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:09.010 02:49:39 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:09.010 02:49:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.010 02:49:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.010 02:49:39 -- common/autotest_common.sh@10 -- # set +x 
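The dpdk_mem_utility pass above is essentially two steps: ask the running target to dump its DPDK allocation state, then post-process the dump with the helper script. A minimal sketch of doing the same by hand, assuming an spdk_tgt listening on the default RPC socket and the repo checked out at the path used in this run:

  # Ask the target to write its DPDK memory statistics; the JSON reply above
  # shows the dump lands in /tmp/spdk_mem_dump.txt.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
  # Summarize heaps, mempools and memzones from the dump.
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
  # Detailed per-element listing for heap 0, matching the long element listing above.
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0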
00:05:09.010 ************************************ 00:05:09.010 START TEST event 00:05:09.010 ************************************ 00:05:09.010 02:49:39 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:09.010 * Looking for test storage... 00:05:09.010 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:09.010 02:49:39 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:09.010 02:49:39 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:09.010 02:49:39 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:09.010 02:49:39 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:09.010 02:49:39 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.010 02:49:39 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.010 02:49:39 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.010 02:49:39 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.010 02:49:39 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.010 02:49:39 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.010 02:49:39 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.010 02:49:39 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.010 02:49:39 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.010 02:49:39 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.010 02:49:39 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.010 02:49:39 event -- scripts/common.sh@344 -- # case "$op" in 00:05:09.010 02:49:39 event -- scripts/common.sh@345 -- # : 1 00:05:09.010 02:49:39 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.010 02:49:39 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:09.010 02:49:39 event -- scripts/common.sh@365 -- # decimal 1 00:05:09.010 02:49:39 event -- scripts/common.sh@353 -- # local d=1 00:05:09.010 02:49:39 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.010 02:49:39 event -- scripts/common.sh@355 -- # echo 1 00:05:09.010 02:49:39 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.010 02:49:39 event -- scripts/common.sh@366 -- # decimal 2 00:05:09.010 02:49:39 event -- scripts/common.sh@353 -- # local d=2 00:05:09.010 02:49:39 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.010 02:49:39 event -- scripts/common.sh@355 -- # echo 2 00:05:09.010 02:49:39 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.010 02:49:39 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.010 02:49:39 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.011 02:49:39 event -- scripts/common.sh@368 -- # return 0 00:05:09.011 02:49:39 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.011 02:49:39 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:09.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.011 --rc genhtml_branch_coverage=1 00:05:09.011 --rc genhtml_function_coverage=1 00:05:09.011 --rc genhtml_legend=1 00:05:09.011 --rc geninfo_all_blocks=1 00:05:09.011 --rc geninfo_unexecuted_blocks=1 00:05:09.011 00:05:09.011 ' 00:05:09.011 02:49:39 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:09.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.011 --rc genhtml_branch_coverage=1 00:05:09.011 --rc genhtml_function_coverage=1 00:05:09.011 --rc genhtml_legend=1 00:05:09.011 --rc 
geninfo_all_blocks=1 00:05:09.011 --rc geninfo_unexecuted_blocks=1 00:05:09.011 00:05:09.011 ' 00:05:09.011 02:49:39 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:09.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.011 --rc genhtml_branch_coverage=1 00:05:09.011 --rc genhtml_function_coverage=1 00:05:09.011 --rc genhtml_legend=1 00:05:09.011 --rc geninfo_all_blocks=1 00:05:09.011 --rc geninfo_unexecuted_blocks=1 00:05:09.011 00:05:09.011 ' 00:05:09.011 02:49:39 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:09.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.011 --rc genhtml_branch_coverage=1 00:05:09.011 --rc genhtml_function_coverage=1 00:05:09.011 --rc genhtml_legend=1 00:05:09.011 --rc geninfo_all_blocks=1 00:05:09.011 --rc geninfo_unexecuted_blocks=1 00:05:09.011 00:05:09.011 ' 00:05:09.011 02:49:39 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:09.011 02:49:39 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:09.011 02:49:39 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:09.011 02:49:39 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:09.011 02:49:39 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.011 02:49:39 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.011 ************************************ 00:05:09.011 START TEST event_perf 00:05:09.011 ************************************ 00:05:09.011 02:49:39 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:09.011 Running I/O for 1 seconds...[2024-12-05 02:49:39.690724] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:09.011 [2024-12-05 02:49:39.690897] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58187 ] 00:05:09.011 [2024-12-05 02:49:39.846769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:09.270 [2024-12-05 02:49:39.931173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.270 [2024-12-05 02:49:39.931557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:09.270 [2024-12-05 02:49:39.931751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.270 Running I/O for 1 seconds...[2024-12-05 02:49:39.931774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:10.234 00:05:10.234 lcore 0: 202360 00:05:10.234 lcore 1: 202363 00:05:10.234 lcore 2: 202365 00:05:10.234 lcore 3: 202361 00:05:10.234 done. 
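The lcore counters above come from the standalone event_perf benchmark that the harness drives; a sketch of the equivalent manual invocation, assuming the build tree used in this log:

  # Core mask 0xF starts four reactors; -t 1 runs the measurement for one second.
  # Each lcore then prints how many events it processed (the "lcore N: ..." lines).
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1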
00:05:10.234 ************************************ 00:05:10.234 END TEST event_perf 00:05:10.234 ************************************ 00:05:10.234 00:05:10.234 real 0m1.402s 00:05:10.234 user 0m4.208s 00:05:10.234 sys 0m0.076s 00:05:10.234 02:49:41 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.234 02:49:41 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:10.491 02:49:41 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:10.491 02:49:41 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:10.491 02:49:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.491 02:49:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:10.491 ************************************ 00:05:10.491 START TEST event_reactor 00:05:10.491 ************************************ 00:05:10.491 02:49:41 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:10.491 [2024-12-05 02:49:41.130580] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:10.491 [2024-12-05 02:49:41.130798] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58227 ] 00:05:10.491 [2024-12-05 02:49:41.287234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.748 [2024-12-05 02:49:41.363158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.684 test_start 00:05:11.684 oneshot 00:05:11.684 tick 100 00:05:11.684 tick 100 00:05:11.684 tick 250 00:05:11.684 tick 100 00:05:11.684 tick 100 00:05:11.684 tick 100 00:05:11.684 tick 250 00:05:11.684 tick 500 00:05:11.684 tick 100 00:05:11.684 tick 100 00:05:11.684 tick 250 00:05:11.684 tick 100 00:05:11.684 tick 100 00:05:11.684 test_end 00:05:11.684 00:05:11.684 real 0m1.384s 00:05:11.684 user 0m1.207s 00:05:11.684 sys 0m0.069s 00:05:11.684 02:49:42 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.684 ************************************ 00:05:11.684 END TEST event_reactor 00:05:11.684 ************************************ 00:05:11.684 02:49:42 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:11.684 02:49:42 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:11.684 02:49:42 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:11.684 02:49:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.684 02:49:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:11.684 ************************************ 00:05:11.684 START TEST event_reactor_perf 00:05:11.684 ************************************ 00:05:11.684 02:49:42 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:11.943 [2024-12-05 02:49:42.553526] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:05:11.943 [2024-12-05 02:49:42.553745] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58258 ] 00:05:11.943 [2024-12-05 02:49:42.707471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.943 [2024-12-05 02:49:42.785467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.319 test_start 00:05:13.319 test_end 00:05:13.319 Performance: 413400 events per second 00:05:13.319 ************************************ 00:05:13.319 END TEST event_reactor_perf 00:05:13.319 ************************************ 00:05:13.319 00:05:13.319 real 0m1.376s 00:05:13.319 user 0m1.208s 00:05:13.319 sys 0m0.061s 00:05:13.319 02:49:43 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.319 02:49:43 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:13.319 02:49:43 event -- event/event.sh@49 -- # uname -s 00:05:13.319 02:49:43 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:13.319 02:49:43 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:13.320 02:49:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.320 02:49:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.320 02:49:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.320 ************************************ 00:05:13.320 START TEST event_scheduler 00:05:13.320 ************************************ 00:05:13.320 02:49:43 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:13.320 * Looking for test storage... 
00:05:13.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:13.320 02:49:44 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:13.320 02:49:44 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:13.320 02:49:44 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:13.320 02:49:44 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:13.320 02:49:44 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.320 02:49:44 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.320 02:49:44 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.320 02:49:44 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.320 02:49:44 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.320 02:49:44 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.320 02:49:44 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.320 02:49:44 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.320 02:49:44 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.320 02:49:44 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.320 02:49:44 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.320 02:49:44 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:13.320 02:49:44 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:13.320 02:49:44 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.320 02:49:44 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:13.320 02:49:44 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:13.320 02:49:44 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:13.320 02:49:44 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.320 02:49:44 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:13.320 02:49:44 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.320 02:49:44 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:13.320 02:49:44 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:13.320 02:49:44 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.320 02:49:44 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:13.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:13.320 02:49:44 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.320 02:49:44 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.320 02:49:44 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.320 02:49:44 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:13.320 02:49:44 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.320 02:49:44 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:13.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.320 --rc genhtml_branch_coverage=1 00:05:13.320 --rc genhtml_function_coverage=1 00:05:13.320 --rc genhtml_legend=1 00:05:13.320 --rc geninfo_all_blocks=1 00:05:13.320 --rc geninfo_unexecuted_blocks=1 00:05:13.320 00:05:13.320 ' 00:05:13.320 02:49:44 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:13.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.320 --rc genhtml_branch_coverage=1 00:05:13.320 --rc genhtml_function_coverage=1 00:05:13.320 --rc genhtml_legend=1 00:05:13.320 --rc geninfo_all_blocks=1 00:05:13.320 --rc geninfo_unexecuted_blocks=1 00:05:13.320 00:05:13.320 ' 00:05:13.320 02:49:44 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:13.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.320 --rc genhtml_branch_coverage=1 00:05:13.320 --rc genhtml_function_coverage=1 00:05:13.320 --rc genhtml_legend=1 00:05:13.320 --rc geninfo_all_blocks=1 00:05:13.320 --rc geninfo_unexecuted_blocks=1 00:05:13.320 00:05:13.320 ' 00:05:13.320 02:49:44 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:13.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.320 --rc genhtml_branch_coverage=1 00:05:13.320 --rc genhtml_function_coverage=1 00:05:13.320 --rc genhtml_legend=1 00:05:13.320 --rc geninfo_all_blocks=1 00:05:13.320 --rc geninfo_unexecuted_blocks=1 00:05:13.320 00:05:13.320 ' 00:05:13.320 02:49:44 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:13.320 02:49:44 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58334 00:05:13.320 02:49:44 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.320 02:49:44 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58334 00:05:13.320 02:49:44 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58334 ']' 00:05:13.320 02:49:44 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.320 02:49:44 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.320 02:49:44 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.320 02:49:44 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.320 02:49:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:13.320 02:49:44 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:13.320 [2024-12-05 02:49:44.132799] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:05:13.320 [2024-12-05 02:49:44.132923] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58334 ] 00:05:13.578 [2024-12-05 02:49:44.294993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:13.578 [2024-12-05 02:49:44.395169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.578 [2024-12-05 02:49:44.395657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.578 [2024-12-05 02:49:44.396064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:13.578 [2024-12-05 02:49:44.396104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:14.149 02:49:44 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.149 02:49:44 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:14.149 02:49:44 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:14.149 02:49:44 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.149 02:49:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:14.149 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:14.149 POWER: Cannot set governor of lcore 0 to userspace 00:05:14.150 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:14.150 POWER: Cannot set governor of lcore 0 to performance 00:05:14.150 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:14.150 POWER: Cannot set governor of lcore 0 to userspace 00:05:14.150 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:14.150 POWER: Cannot set governor of lcore 0 to userspace 00:05:14.150 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:14.150 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:14.150 POWER: Unable to set Power Management Environment for lcore 0 00:05:14.150 [2024-12-05 02:49:44.974467] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:14.150 [2024-12-05 02:49:44.974488] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:14.150 [2024-12-05 02:49:44.974497] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:14.150 [2024-12-05 02:49:44.974514] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:14.150 [2024-12-05 02:49:44.974521] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:14.150 [2024-12-05 02:49:44.974530] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:14.150 02:49:44 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.150 02:49:44 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:14.150 02:49:44 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.150 02:49:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:14.412 [2024-12-05 02:49:45.198387] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
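Because the scheduler test app is launched with --wait-for-rpc, the scheduler has to be chosen over RPC before initialization completes; the POWER/governor errors above appear to be the dynamic scheduler failing to reach cpufreq in this VM and then carrying on without a governor. A sketch of the configuration sequence, assuming the same paths and default RPC socket as this run:

  # Launch paused for RPC configuration, same core mask and main core as above.
  /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  # Select the dynamic scheduler, then let framework initialization finish.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_set_scheduler dynamic
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init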
00:05:14.412 02:49:45 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.412 02:49:45 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:14.412 02:49:45 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.412 02:49:45 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.412 02:49:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:14.412 ************************************ 00:05:14.412 START TEST scheduler_create_thread 00:05:14.412 ************************************ 00:05:14.412 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:14.412 02:49:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:14.412 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.412 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.412 2 00:05:14.412 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.412 02:49:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:14.412 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.412 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.412 3 00:05:14.412 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.412 02:49:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:14.412 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.413 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.413 4 00:05:14.413 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.413 02:49:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:14.413 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.413 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.413 5 00:05:14.413 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.413 02:49:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:14.413 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.413 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.673 6 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.674 7 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.674 8 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.674 9 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.674 10 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.674 ************************************ 00:05:14.674 END TEST scheduler_create_thread 00:05:14.674 ************************************ 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.674 00:05:14.674 real 0m0.107s 00:05:14.674 user 0m0.011s 00:05:14.674 sys 0m0.005s 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.674 02:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.674 02:49:45 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:14.674 02:49:45 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58334 00:05:14.674 02:49:45 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58334 ']' 00:05:14.674 02:49:45 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58334 00:05:14.674 02:49:45 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:14.674 02:49:45 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.674 02:49:45 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58334 00:05:14.674 killing process with pid 58334 00:05:14.674 02:49:45 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:14.674 02:49:45 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:14.674 02:49:45 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58334' 00:05:14.674 02:49:45 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58334 00:05:14.674 02:49:45 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58334 00:05:15.245 [2024-12-05 02:49:45.803002] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
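scheduler_create_thread drives thread RPCs that come from the test plugin shipped alongside the scheduler app rather than from core SPDK, which is why every call goes through --plugin scheduler_plugin. A sketch of the same calls issued directly, assuming rpc.py can locate the plugin (for example with test/event/scheduler on PYTHONPATH); the thread ids 11 and 12 mirror the ones returned in this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Pinned and unpinned threads with different active percentages and core masks.
  $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  $rpc --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
  $rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0   # returned thread id 11 in this run
  $rpc --plugin scheduler_plugin scheduler_thread_set_active 11 50
  $rpc --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100     # returned thread id 12 in this run
  $rpc --plugin scheduler_plugin scheduler_thread_delete 12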
00:05:15.816 00:05:15.816 real 0m2.595s 00:05:15.816 user 0m4.371s 00:05:15.816 sys 0m0.353s 00:05:15.816 02:49:46 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.816 02:49:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:15.816 ************************************ 00:05:15.816 END TEST event_scheduler 00:05:15.816 ************************************ 00:05:15.816 02:49:46 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:15.816 02:49:46 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:15.816 02:49:46 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.816 02:49:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.816 02:49:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.816 ************************************ 00:05:15.816 START TEST app_repeat 00:05:15.816 ************************************ 00:05:15.816 02:49:46 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:15.816 02:49:46 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.816 02:49:46 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.816 02:49:46 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:15.816 02:49:46 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.816 02:49:46 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:15.816 02:49:46 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:15.816 02:49:46 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:15.816 Process app_repeat pid: 58407 00:05:15.816 spdk_app_start Round 0 00:05:15.816 02:49:46 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58407 00:05:15.816 02:49:46 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.816 02:49:46 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58407' 00:05:15.816 02:49:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:15.816 02:49:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:15.816 02:49:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58407 /var/tmp/spdk-nbd.sock 00:05:15.816 02:49:46 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:15.816 02:49:46 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58407 ']' 00:05:15.816 02:49:46 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:15.816 02:49:46 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:15.816 02:49:46 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:15.816 02:49:46 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.816 02:49:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:15.816 [2024-12-05 02:49:46.622126] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:05:15.816 [2024-12-05 02:49:46.622203] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58407 ] 00:05:16.074 [2024-12-05 02:49:46.770817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.074 [2024-12-05 02:49:46.851084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.074 [2024-12-05 02:49:46.851125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.011 02:49:47 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.011 02:49:47 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:17.011 02:49:47 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.011 Malloc0 00:05:17.011 02:49:47 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.268 Malloc1 00:05:17.268 02:49:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.268 02:49:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.268 02:49:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.268 02:49:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:17.268 02:49:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.268 02:49:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:17.268 02:49:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.268 02:49:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.268 02:49:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.268 02:49:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:17.268 02:49:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.268 02:49:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:17.268 02:49:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:17.268 02:49:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:17.268 02:49:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.268 02:49:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:17.526 /dev/nbd0 00:05:17.526 02:49:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:17.526 02:49:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:17.526 02:49:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:17.526 02:49:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:17.526 02:49:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:17.526 02:49:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:17.526 02:49:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:17.526 02:49:48 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:17.526 02:49:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:17.526 02:49:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:17.526 02:49:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.526 1+0 records in 00:05:17.526 1+0 records out 00:05:17.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346455 s, 11.8 MB/s 00:05:17.526 02:49:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.526 02:49:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:17.526 02:49:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.526 02:49:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:17.526 02:49:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:17.526 02:49:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.526 02:49:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.526 02:49:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:17.784 /dev/nbd1 00:05:17.784 02:49:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:17.784 02:49:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:17.784 02:49:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:17.784 02:49:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:17.784 02:49:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:17.784 02:49:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:17.784 02:49:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:17.784 02:49:48 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:17.784 02:49:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:17.784 02:49:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:17.784 02:49:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.784 1+0 records in 00:05:17.784 1+0 records out 00:05:17.784 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179234 s, 22.9 MB/s 00:05:17.784 02:49:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.784 02:49:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:17.785 02:49:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.785 02:49:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:17.785 02:49:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:17.785 02:49:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.785 02:49:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.785 02:49:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.785 02:49:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:05:17.785 02:49:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:18.042 { 00:05:18.042 "nbd_device": "/dev/nbd0", 00:05:18.042 "bdev_name": "Malloc0" 00:05:18.042 }, 00:05:18.042 { 00:05:18.042 "nbd_device": "/dev/nbd1", 00:05:18.042 "bdev_name": "Malloc1" 00:05:18.042 } 00:05:18.042 ]' 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:18.042 { 00:05:18.042 "nbd_device": "/dev/nbd0", 00:05:18.042 "bdev_name": "Malloc0" 00:05:18.042 }, 00:05:18.042 { 00:05:18.042 "nbd_device": "/dev/nbd1", 00:05:18.042 "bdev_name": "Malloc1" 00:05:18.042 } 00:05:18.042 ]' 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:18.042 /dev/nbd1' 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:18.042 /dev/nbd1' 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:18.042 256+0 records in 00:05:18.042 256+0 records out 00:05:18.042 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120454 s, 87.1 MB/s 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:18.042 256+0 records in 00:05:18.042 256+0 records out 00:05:18.042 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138596 s, 75.7 MB/s 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:18.042 256+0 records in 00:05:18.042 256+0 records out 00:05:18.042 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166931 s, 62.8 MB/s 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.042 02:49:48 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.042 02:49:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:18.299 02:49:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:18.299 02:49:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:18.299 02:49:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:18.299 02:49:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.299 02:49:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.299 02:49:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:18.299 02:49:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.299 02:49:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.299 02:49:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.299 02:49:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:18.557 02:49:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:18.557 02:49:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:18.557 02:49:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:18.557 02:49:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.557 02:49:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.557 02:49:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:18.557 02:49:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.557 02:49:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.557 02:49:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.557 02:49:49 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.557 02:49:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.557 02:49:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:18.557 02:49:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:18.557 02:49:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.557 02:49:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:18.815 02:49:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:18.815 02:49:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.815 02:49:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:18.815 02:49:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:18.815 02:49:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:18.815 02:49:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:18.815 02:49:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:18.815 02:49:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:18.815 02:49:49 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:18.815 02:49:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:19.380 [2024-12-05 02:49:50.213933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.638 [2024-12-05 02:49:50.288589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.638 [2024-12-05 02:49:50.288811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.638 [2024-12-05 02:49:50.386652] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:19.639 [2024-12-05 02:49:50.386710] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:22.194 spdk_app_start Round 1 00:05:22.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:22.194 02:49:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:22.194 02:49:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:22.194 02:49:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58407 /var/tmp/spdk-nbd.sock 00:05:22.194 02:49:52 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58407 ']' 00:05:22.194 02:49:52 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.194 02:49:52 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.194 02:49:52 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
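The Round 0 cycle that just finished is what app_repeat repeats for every round: two malloc bdevs are created over the /var/tmp/spdk-nbd.sock RPC socket, exported as /dev/nbd0 and /dev/nbd1, written with 1 MiB of random reference data, verified with cmp, unexported again (nbd_get_disks must then return an empty list), and the app is finally killed with spdk_kill_instance SIGTERM. A condensed sketch of one round, with repository paths shortened and the nbd_common.sh helpers inlined; the real helpers also poll /proc/partitions and compare the nbd_get_disks device count before and after:

    RPC="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    $RPC bdev_malloc_create 64 4096        # Malloc0 (64 MB with 4096-byte blocks in this trace)
    $RPC bdev_malloc_create 64 4096        # Malloc1
    $RPC nbd_start_disk Malloc0 /dev/nbd0
    $RPC nbd_start_disk Malloc1 /dev/nbd1

    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256             # 1 MiB reference file
    for nbd in /dev/nbd0 /dev/nbd1; do                              # write pass
        dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
    done
    for nbd in /dev/nbd0 /dev/nbd1; do                              # verify pass
        cmp -b -n 1M nbdrandtest "$nbd"
    done
    rm nbdrandtest

    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1
    $RPC spdk_kill_instance SIGTERM        # ends the round; the app is restarted for the next one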
00:05:22.194 02:49:52 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.194 02:49:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.194 02:49:52 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.194 02:49:52 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:22.194 02:49:52 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.452 Malloc0 00:05:22.452 02:49:53 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.710 Malloc1 00:05:22.710 02:49:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.710 02:49:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.710 02:49:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.710 02:49:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:22.710 02:49:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.710 02:49:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:22.710 02:49:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.710 02:49:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.710 02:49:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.710 02:49:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:22.710 02:49:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.710 02:49:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:22.710 02:49:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:22.710 02:49:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:22.710 02:49:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.710 02:49:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:22.710 /dev/nbd0 00:05:22.710 02:49:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:22.710 02:49:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:22.710 02:49:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:22.710 02:49:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:22.710 02:49:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:22.710 02:49:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:22.710 02:49:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:22.710 02:49:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:22.710 02:49:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:22.710 02:49:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:22.710 02:49:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.710 1+0 records in 00:05:22.710 1+0 records out 
00:05:22.710 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000144067 s, 28.4 MB/s 00:05:22.710 02:49:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.710 02:49:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:22.710 02:49:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.710 02:49:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:22.710 02:49:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:22.710 02:49:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.710 02:49:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.710 02:49:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:22.969 /dev/nbd1 00:05:22.969 02:49:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:22.969 02:49:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:22.969 02:49:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:22.969 02:49:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:22.969 02:49:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:22.969 02:49:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:22.969 02:49:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:22.969 02:49:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:22.969 02:49:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:22.969 02:49:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:22.969 02:49:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.969 1+0 records in 00:05:22.969 1+0 records out 00:05:22.969 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226971 s, 18.0 MB/s 00:05:22.969 02:49:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.969 02:49:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:22.969 02:49:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.969 02:49:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:22.969 02:49:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:22.969 02:49:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.969 02:49:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.969 02:49:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.969 02:49:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.969 02:49:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.228 02:49:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:23.228 { 00:05:23.228 "nbd_device": "/dev/nbd0", 00:05:23.228 "bdev_name": "Malloc0" 00:05:23.228 }, 00:05:23.228 { 00:05:23.228 "nbd_device": "/dev/nbd1", 00:05:23.228 "bdev_name": "Malloc1" 00:05:23.228 } 
00:05:23.228 ]' 00:05:23.228 02:49:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:23.228 { 00:05:23.228 "nbd_device": "/dev/nbd0", 00:05:23.228 "bdev_name": "Malloc0" 00:05:23.228 }, 00:05:23.228 { 00:05:23.228 "nbd_device": "/dev/nbd1", 00:05:23.228 "bdev_name": "Malloc1" 00:05:23.228 } 00:05:23.228 ]' 00:05:23.228 02:49:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.228 02:49:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:23.228 /dev/nbd1' 00:05:23.228 02:49:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:23.228 /dev/nbd1' 00:05:23.228 02:49:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.228 02:49:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:23.228 02:49:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:23.228 02:49:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:23.228 02:49:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:23.228 02:49:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:23.228 02:49:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.228 02:49:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.228 02:49:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:23.228 02:49:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.228 02:49:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:23.228 02:49:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:23.228 256+0 records in 00:05:23.228 256+0 records out 00:05:23.228 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00641997 s, 163 MB/s 00:05:23.228 02:49:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.228 02:49:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:23.228 256+0 records in 00:05:23.228 256+0 records out 00:05:23.228 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140079 s, 74.9 MB/s 00:05:23.228 02:49:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.228 02:49:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:23.228 256+0 records in 00:05:23.228 256+0 records out 00:05:23.228 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166608 s, 62.9 MB/s 00:05:23.228 02:49:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:23.228 02:49:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.228 02:49:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.228 02:49:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:23.228 02:49:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.228 02:49:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:23.228 02:49:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:23.228 02:49:54 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.228 02:49:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:23.228 02:49:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.228 02:49:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:23.487 02:49:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.487 02:49:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:23.487 02:49:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.487 02:49:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.487 02:49:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:23.487 02:49:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:23.487 02:49:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.487 02:49:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:23.487 02:49:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:23.487 02:49:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:23.487 02:49:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:23.487 02:49:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.487 02:49:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.487 02:49:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:23.487 02:49:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.487 02:49:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.487 02:49:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.487 02:49:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:23.745 02:49:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:23.745 02:49:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:23.745 02:49:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:23.745 02:49:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.745 02:49:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.745 02:49:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:23.745 02:49:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.745 02:49:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.745 02:49:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.745 02:49:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.745 02:49:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.003 02:49:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:24.003 02:49:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.003 02:49:54 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:24.003 02:49:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:24.003 02:49:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:24.003 02:49:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.003 02:49:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:24.003 02:49:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:24.003 02:49:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:24.003 02:49:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:24.003 02:49:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:24.003 02:49:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:24.003 02:49:54 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:24.261 02:49:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:24.827 [2024-12-05 02:49:55.568386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.827 [2024-12-05 02:49:55.641000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.827 [2024-12-05 02:49:55.641100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.085 [2024-12-05 02:49:55.739725] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:25.085 [2024-12-05 02:49:55.739779] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:27.622 spdk_app_start Round 2 00:05:27.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:27.622 02:49:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:27.622 02:49:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:27.622 02:49:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58407 /var/tmp/spdk-nbd.sock 00:05:27.622 02:49:58 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58407 ']' 00:05:27.622 02:49:58 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:27.622 02:49:58 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.622 02:49:58 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:27.622 02:49:58 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.622 02:49:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:27.622 02:49:58 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.622 02:49:58 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:27.622 02:49:58 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.622 Malloc0 00:05:27.881 02:49:58 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.881 Malloc1 00:05:27.881 02:49:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.881 02:49:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.881 02:49:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.881 02:49:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:27.881 02:49:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.881 02:49:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:27.881 02:49:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.881 02:49:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.881 02:49:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.881 02:49:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:27.881 02:49:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.881 02:49:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:27.881 02:49:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:27.881 02:49:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:27.881 02:49:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.881 02:49:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:28.139 /dev/nbd0 00:05:28.139 02:49:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:28.139 02:49:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:28.139 02:49:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:28.139 02:49:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:28.139 02:49:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:28.139 02:49:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:28.139 02:49:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:28.139 02:49:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:28.139 02:49:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:28.139 02:49:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:28.139 02:49:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.139 1+0 records in 00:05:28.139 1+0 records out 
00:05:28.139 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224983 s, 18.2 MB/s 00:05:28.139 02:49:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.139 02:49:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:28.139 02:49:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.139 02:49:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:28.139 02:49:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:28.139 02:49:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.139 02:49:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.139 02:49:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:28.397 /dev/nbd1 00:05:28.397 02:49:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:28.397 02:49:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:28.397 02:49:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:28.397 02:49:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:28.397 02:49:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:28.397 02:49:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:28.397 02:49:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:28.397 02:49:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:28.397 02:49:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:28.397 02:49:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:28.397 02:49:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.397 1+0 records in 00:05:28.397 1+0 records out 00:05:28.397 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000125232 s, 32.7 MB/s 00:05:28.397 02:49:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.397 02:49:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:28.397 02:49:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.397 02:49:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:28.397 02:49:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:28.397 02:49:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.397 02:49:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.397 02:49:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.397 02:49:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.397 02:49:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.655 02:49:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:28.655 { 00:05:28.655 "nbd_device": "/dev/nbd0", 00:05:28.655 "bdev_name": "Malloc0" 00:05:28.655 }, 00:05:28.655 { 00:05:28.655 "nbd_device": "/dev/nbd1", 00:05:28.655 "bdev_name": "Malloc1" 00:05:28.655 } 
00:05:28.655 ]' 00:05:28.655 02:49:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:28.655 { 00:05:28.655 "nbd_device": "/dev/nbd0", 00:05:28.655 "bdev_name": "Malloc0" 00:05:28.655 }, 00:05:28.655 { 00:05:28.655 "nbd_device": "/dev/nbd1", 00:05:28.655 "bdev_name": "Malloc1" 00:05:28.655 } 00:05:28.655 ]' 00:05:28.655 02:49:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.655 02:49:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:28.655 /dev/nbd1' 00:05:28.655 02:49:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.655 02:49:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:28.655 /dev/nbd1' 00:05:28.655 02:49:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:28.655 02:49:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:28.655 02:49:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:28.655 02:49:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:28.655 02:49:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:28.655 02:49:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.655 02:49:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.655 02:49:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:28.656 02:49:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.656 02:49:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:28.656 02:49:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:28.656 256+0 records in 00:05:28.656 256+0 records out 00:05:28.656 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103744 s, 101 MB/s 00:05:28.656 02:49:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.656 02:49:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:28.656 256+0 records in 00:05:28.656 256+0 records out 00:05:28.656 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0180052 s, 58.2 MB/s 00:05:28.656 02:49:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.656 02:49:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:28.656 256+0 records in 00:05:28.656 256+0 records out 00:05:28.656 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171863 s, 61.0 MB/s 00:05:28.656 02:49:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:28.656 02:49:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.656 02:49:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.656 02:49:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:28.656 02:49:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.656 02:49:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:28.656 02:49:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:28.656 02:49:59 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:28.656 02:49:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:28.656 02:49:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.656 02:49:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:28.656 02:49:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.656 02:49:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:28.656 02:49:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.656 02:49:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.656 02:49:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:28.656 02:49:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:28.656 02:49:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.656 02:49:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:28.914 02:49:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:28.914 02:49:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:28.914 02:49:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:28.914 02:49:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.914 02:49:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.914 02:49:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:28.914 02:49:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.914 02:49:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.914 02:49:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.914 02:49:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:29.173 02:49:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:29.173 02:49:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:29.173 02:49:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:29.173 02:49:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.173 02:49:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.173 02:49:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:29.173 02:49:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.173 02:49:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.173 02:49:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.173 02:49:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.173 02:49:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.431 02:50:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:29.431 02:50:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.431 02:50:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:05:29.431 02:50:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:29.431 02:50:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:29.431 02:50:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.431 02:50:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:29.431 02:50:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:29.431 02:50:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:29.431 02:50:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:29.431 02:50:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:29.431 02:50:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:29.431 02:50:00 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:29.689 02:50:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:30.256 [2024-12-05 02:50:00.981588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.256 [2024-12-05 02:50:01.060647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.256 [2024-12-05 02:50:01.060871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.514 [2024-12-05 02:50:01.158785] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:30.514 [2024-12-05 02:50:01.158837] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:33.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:33.045 02:50:03 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58407 /var/tmp/spdk-nbd.sock 00:05:33.045 02:50:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58407 ']' 00:05:33.045 02:50:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:33.045 02:50:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.045 02:50:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:33.045 02:50:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.045 02:50:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:33.045 02:50:03 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.045 02:50:03 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:33.045 02:50:03 event.app_repeat -- event/event.sh@39 -- # killprocess 58407 00:05:33.045 02:50:03 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58407 ']' 00:05:33.045 02:50:03 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58407 00:05:33.045 02:50:03 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:33.045 02:50:03 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.045 02:50:03 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58407 00:05:33.045 killing process with pid 58407 00:05:33.045 02:50:03 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.045 02:50:03 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.045 02:50:03 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58407' 00:05:33.045 02:50:03 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58407 00:05:33.045 02:50:03 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58407 00:05:33.612 spdk_app_start is called in Round 0. 00:05:33.612 Shutdown signal received, stop current app iteration 00:05:33.612 Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 reinitialization... 00:05:33.612 spdk_app_start is called in Round 1. 00:05:33.612 Shutdown signal received, stop current app iteration 00:05:33.612 Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 reinitialization... 00:05:33.612 spdk_app_start is called in Round 2. 00:05:33.612 Shutdown signal received, stop current app iteration 00:05:33.612 Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 reinitialization... 00:05:33.612 spdk_app_start is called in Round 3. 00:05:33.612 Shutdown signal received, stop current app iteration 00:05:33.612 02:50:04 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:33.612 02:50:04 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:33.612 00:05:33.612 real 0m17.592s 00:05:33.612 user 0m38.622s 00:05:33.612 sys 0m2.015s 00:05:33.612 02:50:04 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.612 02:50:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:33.612 ************************************ 00:05:33.612 END TEST app_repeat 00:05:33.612 ************************************ 00:05:33.612 02:50:04 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:33.612 02:50:04 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:33.612 02:50:04 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.612 02:50:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.612 02:50:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.612 ************************************ 00:05:33.612 START TEST cpu_locks 00:05:33.612 ************************************ 00:05:33.612 02:50:04 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:33.612 * Looking for test storage... 
00:05:33.612 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:33.612 02:50:04 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:33.612 02:50:04 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:33.612 02:50:04 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:33.612 02:50:04 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:33.612 02:50:04 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.612 02:50:04 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.612 02:50:04 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.612 02:50:04 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.612 02:50:04 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.612 02:50:04 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.612 02:50:04 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.612 02:50:04 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.612 02:50:04 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.612 02:50:04 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.612 02:50:04 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.612 02:50:04 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:33.612 02:50:04 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:33.612 02:50:04 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.612 02:50:04 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:33.612 02:50:04 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:33.612 02:50:04 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:33.612 02:50:04 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.612 02:50:04 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:33.612 02:50:04 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.612 02:50:04 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:33.612 02:50:04 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:33.612 02:50:04 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.612 02:50:04 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:33.612 02:50:04 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.612 02:50:04 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.612 02:50:04 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.612 02:50:04 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:33.612 02:50:04 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.612 02:50:04 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:33.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.612 --rc genhtml_branch_coverage=1 00:05:33.612 --rc genhtml_function_coverage=1 00:05:33.612 --rc genhtml_legend=1 00:05:33.612 --rc geninfo_all_blocks=1 00:05:33.612 --rc geninfo_unexecuted_blocks=1 00:05:33.612 00:05:33.612 ' 00:05:33.612 02:50:04 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:33.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.612 --rc genhtml_branch_coverage=1 00:05:33.612 --rc genhtml_function_coverage=1 
00:05:33.612 --rc genhtml_legend=1 00:05:33.612 --rc geninfo_all_blocks=1 00:05:33.612 --rc geninfo_unexecuted_blocks=1 00:05:33.612 00:05:33.612 ' 00:05:33.612 02:50:04 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:33.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.612 --rc genhtml_branch_coverage=1 00:05:33.612 --rc genhtml_function_coverage=1 00:05:33.612 --rc genhtml_legend=1 00:05:33.612 --rc geninfo_all_blocks=1 00:05:33.612 --rc geninfo_unexecuted_blocks=1 00:05:33.612 00:05:33.612 ' 00:05:33.612 02:50:04 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:33.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.612 --rc genhtml_branch_coverage=1 00:05:33.612 --rc genhtml_function_coverage=1 00:05:33.612 --rc genhtml_legend=1 00:05:33.612 --rc geninfo_all_blocks=1 00:05:33.612 --rc geninfo_unexecuted_blocks=1 00:05:33.612 00:05:33.612 ' 00:05:33.612 02:50:04 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:33.612 02:50:04 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:33.612 02:50:04 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:33.612 02:50:04 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:33.612 02:50:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.612 02:50:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.612 02:50:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.612 ************************************ 00:05:33.612 START TEST default_locks 00:05:33.612 ************************************ 00:05:33.612 02:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:33.612 02:50:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58837 00:05:33.612 02:50:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58837 00:05:33.612 02:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58837 ']' 00:05:33.613 02:50:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.613 02:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.613 02:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.613 02:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.613 02:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.613 02:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.613 [2024-12-05 02:50:04.432547] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:05:33.613 [2024-12-05 02:50:04.432664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58837 ] 00:05:33.871 [2024-12-05 02:50:04.586155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.871 [2024-12-05 02:50:04.663917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.437 02:50:05 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.437 02:50:05 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:34.437 02:50:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58837 00:05:34.437 02:50:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58837 00:05:34.437 02:50:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:34.695 02:50:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58837 00:05:34.695 02:50:05 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58837 ']' 00:05:34.695 02:50:05 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58837 00:05:34.695 02:50:05 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:34.695 02:50:05 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.695 02:50:05 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58837 00:05:34.695 02:50:05 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:34.695 02:50:05 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.695 killing process with pid 58837 00:05:34.695 02:50:05 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58837' 00:05:34.695 02:50:05 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58837 00:05:34.695 02:50:05 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58837 00:05:36.071 02:50:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58837 00:05:36.071 02:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:36.071 02:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58837 00:05:36.071 02:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:36.071 02:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.071 02:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:36.071 02:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.071 02:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58837 00:05:36.071 02:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58837 ']' 00:05:36.071 02:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.071 02:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.071 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.071 02:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.071 02:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.071 02:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.071 ERROR: process (pid: 58837) is no longer running 00:05:36.071 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58837) - No such process 00:05:36.071 02:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.071 02:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:36.071 02:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:36.071 02:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:36.071 02:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:36.071 02:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:36.071 02:50:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:36.071 02:50:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:36.071 02:50:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:36.071 02:50:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:36.071 00:05:36.071 real 0m2.339s 00:05:36.071 user 0m2.356s 00:05:36.071 sys 0m0.428s 00:05:36.071 02:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.071 ************************************ 00:05:36.071 END TEST default_locks 00:05:36.072 ************************************ 00:05:36.072 02:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.072 02:50:06 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:36.072 02:50:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.072 02:50:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.072 02:50:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.072 ************************************ 00:05:36.072 START TEST default_locks_via_rpc 00:05:36.072 ************************************ 00:05:36.072 02:50:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:36.072 02:50:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58896 00:05:36.072 02:50:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58896 00:05:36.072 02:50:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58896 ']' 00:05:36.072 02:50:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.072 02:50:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.072 02:50:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
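The default_locks case that ends above reduces to a simple pattern: start one spdk_tgt pinned to core 0, use lslocks to confirm the process holds an spdk_cpu_lock file, kill it, and treat the follow-up waitforlisten failure ("No such process") as proof the app is gone. A rough, non-authoritative re-creation under those assumptions:

  build/bin/spdk_tgt -m 0x1 &
  pid=$!
  # wait for the default RPC socket, e.g. until scripts/rpc.py spdk_get_version succeeds
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by pid $pid"
  kill "$pid"; wait "$pid" || true
  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo "no CPU core lock files remain"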
00:05:36.072 02:50:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.072 02:50:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.072 02:50:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.072 [2024-12-05 02:50:06.815459] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:36.072 [2024-12-05 02:50:06.815576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58896 ] 00:05:36.329 [2024-12-05 02:50:06.973743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.329 [2024-12-05 02:50:07.052337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.894 02:50:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.894 02:50:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:36.894 02:50:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:36.894 02:50:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.894 02:50:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.894 02:50:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.894 02:50:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:36.894 02:50:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:36.894 02:50:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:36.894 02:50:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:36.894 02:50:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:36.894 02:50:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.894 02:50:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.894 02:50:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.894 02:50:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58896 00:05:36.894 02:50:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58896 00:05:36.894 02:50:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:37.153 02:50:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58896 00:05:37.153 02:50:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58896 ']' 00:05:37.153 02:50:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58896 00:05:37.153 02:50:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:37.153 02:50:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.153 02:50:07 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58896 00:05:37.153 02:50:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.153 02:50:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.153 killing process with pid 58896 00:05:37.153 02:50:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58896' 00:05:37.153 02:50:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58896 00:05:37.153 02:50:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58896 00:05:38.526 00:05:38.526 real 0m2.337s 00:05:38.526 user 0m2.357s 00:05:38.526 sys 0m0.441s 00:05:38.526 02:50:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.526 02:50:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.526 ************************************ 00:05:38.526 END TEST default_locks_via_rpc 00:05:38.526 ************************************ 00:05:38.526 02:50:09 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:38.526 02:50:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.526 02:50:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.526 02:50:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.526 ************************************ 00:05:38.526 START TEST non_locking_app_on_locked_coremask 00:05:38.526 ************************************ 00:05:38.526 02:50:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:38.526 02:50:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58948 00:05:38.526 02:50:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58948 /var/tmp/spdk.sock 00:05:38.526 02:50:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58948 ']' 00:05:38.526 02:50:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.526 02:50:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.526 02:50:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.526 02:50:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.526 02:50:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.526 02:50:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.526 [2024-12-05 02:50:09.198249] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
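default_locks_via_rpc, which finishes just above, exercises the same locks at runtime instead of at startup: framework_disable_cpumask_locks releases the per-core lock files on a live target and framework_enable_cpumask_locks re-acquires them, with lslocks used both times to verify. A hedged sketch of that toggle against a single running target:

  pid=$(pgrep -f spdk_tgt)                                 # assumes exactly one target is running
  scripts/rpc.py framework_disable_cpumask_locks           # releases the /var/tmp/spdk_cpu_lock_* files
  lslocks -p "$pid" | grep -c spdk_cpu_lock                # expect 0 while locks are disabled
  scripts/rpc.py framework_enable_cpumask_locks            # re-claims the cores in the -m mask
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "locks re-acquired"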
00:05:38.526 [2024-12-05 02:50:09.198369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58948 ] 00:05:38.526 [2024-12-05 02:50:09.355784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.785 [2024-12-05 02:50:09.452944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.360 02:50:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.360 02:50:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:39.360 02:50:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:39.360 02:50:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58964 00:05:39.360 02:50:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58964 /var/tmp/spdk2.sock 00:05:39.360 02:50:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58964 ']' 00:05:39.360 02:50:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:39.360 02:50:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:39.360 02:50:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:39.360 02:50:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.360 02:50:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.360 [2024-12-05 02:50:10.098901] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:39.360 [2024-12-05 02:50:10.099022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58964 ] 00:05:39.627 [2024-12-05 02:50:10.270218] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:39.627 [2024-12-05 02:50:10.270266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.627 [2024-12-05 02:50:10.462485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.002 02:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.002 02:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:41.002 02:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58948 00:05:41.003 02:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:41.003 02:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58948 00:05:41.284 02:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58948 00:05:41.284 02:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58948 ']' 00:05:41.284 02:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58948 00:05:41.284 02:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:41.284 02:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.284 02:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58948 00:05:41.284 02:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.284 killing process with pid 58948 00:05:41.284 02:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.284 02:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58948' 00:05:41.284 02:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58948 00:05:41.284 02:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58948 00:05:43.853 02:50:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58964 00:05:43.853 02:50:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58964 ']' 00:05:43.853 02:50:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58964 00:05:43.853 02:50:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:43.853 02:50:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.853 02:50:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58964 00:05:43.853 02:50:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.853 02:50:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.853 killing process with pid 58964 00:05:43.853 02:50:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58964' 00:05:43.853 02:50:14 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58964 00:05:43.853 02:50:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58964 00:05:44.794 00:05:44.794 real 0m6.301s 00:05:44.794 user 0m6.548s 00:05:44.794 sys 0m0.815s 00:05:44.794 ************************************ 00:05:44.794 END TEST non_locking_app_on_locked_coremask 00:05:44.794 ************************************ 00:05:44.794 02:50:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.794 02:50:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.794 02:50:15 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:44.794 02:50:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.794 02:50:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.794 02:50:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.794 ************************************ 00:05:44.794 START TEST locking_app_on_unlocked_coremask 00:05:44.794 ************************************ 00:05:44.794 02:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:44.794 02:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59055 00:05:44.795 02:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59055 /var/tmp/spdk.sock 00:05:44.795 02:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59055 ']' 00:05:44.795 02:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.795 02:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.795 02:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.795 02:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:44.795 02:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.795 02:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.795 [2024-12-05 02:50:15.542724] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:44.795 [2024-12-05 02:50:15.543271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59055 ] 00:05:45.055 [2024-12-05 02:50:15.702872] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
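non_locking_app_on_locked_coremask, wrapped up above, shows the intended way to co-schedule two targets on one core: the first instance claims core 0 as usual, and the second is started on the same -m 0x1 mask but with --disable-cpumask-locks and its own RPC socket, so it neither takes nor trips over the lock. In sketch form, using only flags visible in this trace:

  build/bin/spdk_tgt -m 0x1 &                                                  # claims the core 0 lock
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # shares core 0 without claiming it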
00:05:45.055 [2024-12-05 02:50:15.702913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.055 [2024-12-05 02:50:15.802329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.622 02:50:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.622 02:50:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:45.622 02:50:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59071 00:05:45.622 02:50:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59071 /var/tmp/spdk2.sock 00:05:45.622 02:50:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59071 ']' 00:05:45.622 02:50:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:45.622 02:50:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:45.622 02:50:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:45.622 02:50:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:45.622 02:50:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.622 02:50:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.622 [2024-12-05 02:50:16.462128] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:05:45.622 [2024-12-05 02:50:16.462241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59071 ] 00:05:45.882 [2024-12-05 02:50:16.636436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.141 [2024-12-05 02:50:16.839734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.074 02:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.074 02:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:47.075 02:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59071 00:05:47.075 02:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59071 00:05:47.075 02:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.332 02:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59055 00:05:47.333 02:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59055 ']' 00:05:47.333 02:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59055 00:05:47.333 02:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:47.333 02:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.333 02:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59055 00:05:47.590 02:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.590 killing process with pid 59055 00:05:47.590 02:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.590 02:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59055' 00:05:47.590 02:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59055 00:05:47.590 02:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59055 00:05:50.117 02:50:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59071 00:05:50.117 02:50:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59071 ']' 00:05:50.117 02:50:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59071 00:05:50.117 02:50:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:50.117 02:50:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.117 02:50:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59071 00:05:50.117 02:50:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.117 killing process with pid 59071 00:05:50.117 02:50:20 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.117 02:50:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59071' 00:05:50.117 02:50:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59071 00:05:50.117 02:50:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59071 00:05:51.052 00:05:51.052 real 0m6.360s 00:05:51.052 user 0m6.612s 00:05:51.052 sys 0m0.860s 00:05:51.052 02:50:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.052 02:50:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.052 ************************************ 00:05:51.052 END TEST locking_app_on_unlocked_coremask 00:05:51.052 ************************************ 00:05:51.052 02:50:21 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:51.052 02:50:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.052 02:50:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.052 02:50:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.052 ************************************ 00:05:51.052 START TEST locking_app_on_locked_coremask 00:05:51.052 ************************************ 00:05:51.052 02:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:51.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.052 02:50:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59173 00:05:51.052 02:50:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59173 /var/tmp/spdk.sock 00:05:51.052 02:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59173 ']' 00:05:51.052 02:50:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.052 02:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.052 02:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.052 02:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.052 02:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.052 02:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.310 [2024-12-05 02:50:21.931005] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
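locking_app_on_unlocked_coremask, ending above, is the mirror image: the first target opts out with --disable-cpumask-locks, leaving core 0 unclaimed, and a second lock-enabled target on the same core then starts cleanly and takes the lock itself. A small sketch under the same assumptions as before:

  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &      # leaves core 0 unclaimed
  build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &       # second target claims the core lock
  lslocks -p "$!" | grep -q spdk_cpu_lock && echo "lock held by the second instance"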
00:05:51.310 [2024-12-05 02:50:21.931134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59173 ] 00:05:51.310 [2024-12-05 02:50:22.086784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.586 [2024-12-05 02:50:22.163579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.163 02:50:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.163 02:50:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:52.163 02:50:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59178 00:05:52.163 02:50:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59178 /var/tmp/spdk2.sock 00:05:52.163 02:50:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:52.163 02:50:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59178 /var/tmp/spdk2.sock 00:05:52.163 02:50:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:52.163 02:50:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:52.163 02:50:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.163 02:50:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:52.163 02:50:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.163 02:50:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59178 /var/tmp/spdk2.sock 00:05:52.163 02:50:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59178 ']' 00:05:52.163 02:50:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.163 02:50:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.163 02:50:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.163 02:50:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.163 02:50:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.163 [2024-12-05 02:50:22.788252] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:05:52.163 [2024-12-05 02:50:22.788368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59178 ] 00:05:52.163 [2024-12-05 02:50:22.951760] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59173 has claimed it. 00:05:52.163 [2024-12-05 02:50:22.951804] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:52.729 ERROR: process (pid: 59178) is no longer running 00:05:52.729 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59178) - No such process 00:05:52.729 02:50:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.729 02:50:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:52.729 02:50:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:52.729 02:50:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:52.729 02:50:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:52.729 02:50:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:52.729 02:50:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59173 00:05:52.729 02:50:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59173 00:05:52.729 02:50:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.988 02:50:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59173 00:05:52.988 02:50:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59173 ']' 00:05:52.988 02:50:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59173 00:05:52.988 02:50:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:52.988 02:50:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.988 02:50:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59173 00:05:52.988 02:50:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.988 02:50:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.988 killing process with pid 59173 00:05:52.988 02:50:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59173' 00:05:52.988 02:50:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59173 00:05:52.988 02:50:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59173 00:05:54.368 00:05:54.368 real 0m2.995s 00:05:54.368 user 0m3.177s 00:05:54.368 sys 0m0.537s 00:05:54.368 02:50:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.368 02:50:24 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:54.368 ************************************ 00:05:54.368 END TEST locking_app_on_locked_coremask 00:05:54.368 ************************************ 00:05:54.368 02:50:24 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:54.368 02:50:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.368 02:50:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.368 02:50:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.368 ************************************ 00:05:54.368 START TEST locking_overlapped_coremask 00:05:54.368 ************************************ 00:05:54.368 02:50:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:54.368 02:50:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59239 00:05:54.368 02:50:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59239 /var/tmp/spdk.sock 00:05:54.368 02:50:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59239 ']' 00:05:54.368 02:50:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.368 02:50:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.368 02:50:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.368 02:50:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.368 02:50:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.368 02:50:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:54.368 [2024-12-05 02:50:24.968808] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
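locking_app_on_locked_coremask, which closes above, is the negative case: both instances keep locks enabled on core 0, so the second one aborts with "Cannot create lock on core 0, probably process 59173 has claimed it", and the harness counts that failed waitforlisten as the pass condition. Roughly, and only as a sketch of the expectation:

  build/bin/spdk_tgt -m 0x1 &                                # first instance claims core 0
  if build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then  # runs in the foreground, should exit non-zero
    echo "unexpected: second target acquired an already-claimed core" >&2
  else
    echo "second target refused core 0, matching the claim_cpu_cores error above"
  fi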
00:05:54.368 [2024-12-05 02:50:24.968923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59239 ] 00:05:54.368 [2024-12-05 02:50:25.123665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:54.368 [2024-12-05 02:50:25.205627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.368 [2024-12-05 02:50:25.205956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.368 [2024-12-05 02:50:25.205982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.310 02:50:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.310 02:50:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:55.310 02:50:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59249 00:05:55.310 02:50:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59249 /var/tmp/spdk2.sock 00:05:55.310 02:50:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:55.310 02:50:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59249 /var/tmp/spdk2.sock 00:05:55.310 02:50:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:55.310 02:50:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:55.310 02:50:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.310 02:50:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:55.310 02:50:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.310 02:50:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59249 /var/tmp/spdk2.sock 00:05:55.310 02:50:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59249 ']' 00:05:55.310 02:50:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.310 02:50:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.310 02:50:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.310 02:50:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.310 02:50:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.310 [2024-12-05 02:50:25.870221] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:05:55.310 [2024-12-05 02:50:25.870335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59249 ] 00:05:55.310 [2024-12-05 02:50:26.044033] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59239 has claimed it. 00:05:55.310 [2024-12-05 02:50:26.048099] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:55.880 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59249) - No such process 00:05:55.880 ERROR: process (pid: 59249) is no longer running 00:05:55.880 02:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.880 02:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:55.880 02:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:55.880 02:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:55.880 02:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:55.880 02:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:55.880 02:50:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:55.880 02:50:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:55.880 02:50:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:55.880 02:50:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:55.880 02:50:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59239 00:05:55.880 02:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59239 ']' 00:05:55.880 02:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59239 00:05:55.880 02:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:55.880 02:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.880 02:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59239 00:05:55.880 02:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.880 02:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.880 killing process with pid 59239 00:05:55.880 02:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59239' 00:05:55.880 02:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59239 00:05:55.880 02:50:26 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59239 00:05:57.262 00:05:57.262 real 0m2.816s 00:05:57.262 user 0m7.698s 00:05:57.262 sys 0m0.414s 00:05:57.262 02:50:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.262 02:50:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.262 ************************************ 00:05:57.262 END TEST locking_overlapped_coremask 00:05:57.262 ************************************ 00:05:57.262 02:50:27 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:57.262 02:50:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.262 02:50:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.262 02:50:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.262 ************************************ 00:05:57.262 START TEST locking_overlapped_coremask_via_rpc 00:05:57.262 ************************************ 00:05:57.262 02:50:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:57.262 02:50:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59302 00:05:57.262 02:50:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59302 /var/tmp/spdk.sock 00:05:57.262 02:50:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:57.262 02:50:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59302 ']' 00:05:57.262 02:50:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.262 02:50:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.262 02:50:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.262 02:50:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.262 02:50:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.262 [2024-12-05 02:50:27.821514] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:57.262 [2024-12-05 02:50:27.821612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59302 ] 00:05:57.262 [2024-12-05 02:50:27.968534] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
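locking_overlapped_coremask, finished above, moves from a single core to overlapping masks: the first target takes -m 0x7 (cores 0-2) and the second tries -m 0x1c (cores 2-4); they intersect only on core 2, which is exactly the core named in the claim_cpu_cores error, and check_remaining_locks then confirms /var/tmp/spdk_cpu_lock_000 through _002 are still the only lock files. The overlap itself is just mask arithmetic:

  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))    # -> 0x4, i.e. core 2 is the contested core
  ls /var/tmp/spdk_cpu_lock_*                        # expect _000 _001 _002 from the surviving 0x7 target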
00:05:57.262 [2024-12-05 02:50:27.968574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:57.262 [2024-12-05 02:50:28.051974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.262 [2024-12-05 02:50:28.052235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.262 [2024-12-05 02:50:28.052257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.831 02:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.831 02:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:57.831 02:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:57.831 02:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59320 00:05:57.831 02:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59320 /var/tmp/spdk2.sock 00:05:57.831 02:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59320 ']' 00:05:57.831 02:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.831 02:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.831 02:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.831 02:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.831 02:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.090 [2024-12-05 02:50:28.726740] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:58.090 [2024-12-05 02:50:28.726866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59320 ] 00:05:58.090 [2024-12-05 02:50:28.890494] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:58.090 [2024-12-05 02:50:28.890537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:58.350 [2024-12-05 02:50:29.058920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:58.350 [2024-12-05 02:50:29.062831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.350 [2024-12-05 02:50:29.062846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:59.292 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.292 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:59.292 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:59.292 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.292 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.292 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.292 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:59.292 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:59.292 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:59.292 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:59.292 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.292 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:59.292 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.292 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:59.292 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.292 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.292 [2024-12-05 02:50:30.039233] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59302 has claimed it. 
00:05:59.292 request: 00:05:59.292 { 00:05:59.292 "method": "framework_enable_cpumask_locks", 00:05:59.292 "req_id": 1 00:05:59.292 } 00:05:59.292 Got JSON-RPC error response 00:05:59.292 response: 00:05:59.292 { 00:05:59.292 "code": -32603, 00:05:59.292 "message": "Failed to claim CPU core: 2" 00:05:59.292 } 00:05:59.292 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:59.292 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:59.292 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:59.292 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:59.292 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:59.292 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59302 /var/tmp/spdk.sock 00:05:59.292 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59302 ']' 00:05:59.292 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.292 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.292 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.292 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.292 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.551 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.551 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:59.551 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59320 /var/tmp/spdk2.sock 00:05:59.551 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59320 ']' 00:05:59.551 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.551 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.551 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
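What the trace above demonstrates is that the two targets were started with overlapping core masks (0x7 covers cores 0-2, 0x1c covers cores 2-4, so both want core 2) and with lock claiming deferred, and that only the first can later claim its cores over RPC. A hand-run equivalent, assuming the commands are issued from the SPDK repo root and each target is given time to bring up its RPC socket before the rpc.py calls, would be:

    # First target: cores 0-2, lock files not taken at startup.
    build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    # Second target: cores 2-4, separate RPC socket, locks also disabled.
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    # First target claims /var/tmp/spdk_cpu_lock_000..002 for cores 0-2.
    scripts/rpc.py framework_enable_cpumask_locks
    # Second target now fails on the shared core, as in the log above:
    # JSON-RPC error -32603, "Failed to claim CPU core: 2".
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks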
00:05:59.551 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.551 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.811 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.811 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:59.811 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:59.811 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:59.811 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:59.811 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:59.811 ************************************ 00:05:59.811 END TEST locking_overlapped_coremask_via_rpc 00:05:59.811 ************************************ 00:05:59.811 00:05:59.811 real 0m2.706s 00:05:59.811 user 0m1.066s 00:05:59.811 sys 0m0.128s 00:05:59.811 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.811 02:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.811 02:50:30 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:59.811 02:50:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59302 ]] 00:05:59.811 02:50:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59302 00:05:59.811 02:50:30 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59302 ']' 00:05:59.811 02:50:30 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59302 00:05:59.811 02:50:30 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:59.811 02:50:30 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.811 02:50:30 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59302 00:05:59.811 02:50:30 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.811 killing process with pid 59302 00:05:59.811 02:50:30 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.811 02:50:30 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59302' 00:05:59.811 02:50:30 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59302 00:05:59.811 02:50:30 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59302 00:06:01.189 02:50:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59320 ]] 00:06:01.189 02:50:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59320 00:06:01.189 02:50:31 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59320 ']' 00:06:01.189 02:50:31 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59320 00:06:01.189 02:50:31 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:01.189 02:50:31 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.189 
02:50:31 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59320 00:06:01.189 killing process with pid 59320 00:06:01.189 02:50:31 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:01.189 02:50:31 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:01.189 02:50:31 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59320' 00:06:01.189 02:50:31 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59320 00:06:01.189 02:50:31 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59320 00:06:02.562 02:50:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:02.562 02:50:32 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:02.562 02:50:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59302 ]] 00:06:02.562 02:50:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59302 00:06:02.562 02:50:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59302 ']' 00:06:02.562 02:50:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59302 00:06:02.562 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59302) - No such process 00:06:02.562 Process with pid 59302 is not found 00:06:02.562 02:50:32 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59302 is not found' 00:06:02.562 02:50:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59320 ]] 00:06:02.562 02:50:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59320 00:06:02.562 02:50:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59320 ']' 00:06:02.562 02:50:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59320 00:06:02.562 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59320) - No such process 00:06:02.562 02:50:32 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59320 is not found' 00:06:02.562 Process with pid 59320 is not found 00:06:02.562 02:50:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:02.562 ************************************ 00:06:02.562 END TEST cpu_locks 00:06:02.562 ************************************ 00:06:02.562 00:06:02.562 real 0m28.772s 00:06:02.562 user 0m49.291s 00:06:02.562 sys 0m4.362s 00:06:02.562 02:50:32 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.562 02:50:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.562 00:06:02.562 real 0m53.503s 00:06:02.562 user 1m39.060s 00:06:02.562 sys 0m7.157s 00:06:02.562 02:50:33 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.562 02:50:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.562 ************************************ 00:06:02.562 END TEST event 00:06:02.562 ************************************ 00:06:02.562 02:50:33 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:02.562 02:50:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.562 02:50:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.562 02:50:33 -- common/autotest_common.sh@10 -- # set +x 00:06:02.562 ************************************ 00:06:02.562 START TEST thread 00:06:02.562 ************************************ 00:06:02.562 02:50:33 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:02.562 * Looking for test storage... 
00:06:02.562 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:02.562 02:50:33 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:02.562 02:50:33 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:02.562 02:50:33 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:02.562 02:50:33 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:02.562 02:50:33 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.562 02:50:33 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.562 02:50:33 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.562 02:50:33 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.562 02:50:33 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.562 02:50:33 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.562 02:50:33 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.562 02:50:33 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.562 02:50:33 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.562 02:50:33 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.562 02:50:33 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.562 02:50:33 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:02.562 02:50:33 thread -- scripts/common.sh@345 -- # : 1 00:06:02.562 02:50:33 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.562 02:50:33 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:02.562 02:50:33 thread -- scripts/common.sh@365 -- # decimal 1 00:06:02.562 02:50:33 thread -- scripts/common.sh@353 -- # local d=1 00:06:02.562 02:50:33 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.562 02:50:33 thread -- scripts/common.sh@355 -- # echo 1 00:06:02.562 02:50:33 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.562 02:50:33 thread -- scripts/common.sh@366 -- # decimal 2 00:06:02.562 02:50:33 thread -- scripts/common.sh@353 -- # local d=2 00:06:02.562 02:50:33 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.562 02:50:33 thread -- scripts/common.sh@355 -- # echo 2 00:06:02.562 02:50:33 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.562 02:50:33 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.562 02:50:33 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.562 02:50:33 thread -- scripts/common.sh@368 -- # return 0 00:06:02.562 02:50:33 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.562 02:50:33 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:02.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.562 --rc genhtml_branch_coverage=1 00:06:02.562 --rc genhtml_function_coverage=1 00:06:02.562 --rc genhtml_legend=1 00:06:02.562 --rc geninfo_all_blocks=1 00:06:02.562 --rc geninfo_unexecuted_blocks=1 00:06:02.562 00:06:02.562 ' 00:06:02.562 02:50:33 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:02.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.562 --rc genhtml_branch_coverage=1 00:06:02.562 --rc genhtml_function_coverage=1 00:06:02.562 --rc genhtml_legend=1 00:06:02.562 --rc geninfo_all_blocks=1 00:06:02.562 --rc geninfo_unexecuted_blocks=1 00:06:02.562 00:06:02.562 ' 00:06:02.562 02:50:33 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:02.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:02.562 --rc genhtml_branch_coverage=1 00:06:02.562 --rc genhtml_function_coverage=1 00:06:02.562 --rc genhtml_legend=1 00:06:02.562 --rc geninfo_all_blocks=1 00:06:02.562 --rc geninfo_unexecuted_blocks=1 00:06:02.562 00:06:02.562 ' 00:06:02.562 02:50:33 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:02.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.562 --rc genhtml_branch_coverage=1 00:06:02.562 --rc genhtml_function_coverage=1 00:06:02.562 --rc genhtml_legend=1 00:06:02.562 --rc geninfo_all_blocks=1 00:06:02.562 --rc geninfo_unexecuted_blocks=1 00:06:02.562 00:06:02.562 ' 00:06:02.562 02:50:33 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:02.562 02:50:33 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:02.562 02:50:33 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.562 02:50:33 thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.562 ************************************ 00:06:02.562 START TEST thread_poller_perf 00:06:02.562 ************************************ 00:06:02.562 02:50:33 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:02.562 [2024-12-05 02:50:33.236617] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:02.563 [2024-12-05 02:50:33.236719] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59479 ] 00:06:02.563 [2024-12-05 02:50:33.394563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.821 Running 1000 pollers for 1 seconds with 1 microseconds period. 
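poller_perf prints busy cycles, total_run_count, and tsc_hz, and derives poller_cost from them: the per-poll cost in cycles is busy divided by total_run_count, and the nanosecond figure follows from the TSC frequency. Re-deriving the reported numbers for the first run (1000 pollers, 1 microsecond period) in shell arithmetic reproduces the values in the result block:

    # poller_cost = busy_cycles / total_run_count; nsec = cycles * 1e9 / tsc_hz
    busy=2610634444 runs=405000 tsc_hz=2600000000
    echo "$(( busy / runs )) cyc"                          # 6446, matching the log
    echo "$(( busy / runs * 1000000000 / tsc_hz )) nsec"   # 2479, matching the log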
00:06:02.821 [2024-12-05 02:50:33.474994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.754 [2024-12-05T02:50:34.598Z] ====================================== 00:06:03.754 [2024-12-05T02:50:34.598Z] busy:2610634444 (cyc) 00:06:03.754 [2024-12-05T02:50:34.598Z] total_run_count: 405000 00:06:03.754 [2024-12-05T02:50:34.598Z] tsc_hz: 2600000000 (cyc) 00:06:03.754 [2024-12-05T02:50:34.598Z] ====================================== 00:06:03.754 [2024-12-05T02:50:34.598Z] poller_cost: 6446 (cyc), 2479 (nsec) 00:06:04.012 00:06:04.012 real 0m1.395s 00:06:04.012 user 0m1.222s 00:06:04.012 sys 0m0.066s 00:06:04.012 02:50:34 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.012 ************************************ 00:06:04.012 END TEST thread_poller_perf 00:06:04.012 ************************************ 00:06:04.012 02:50:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.012 02:50:34 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:04.012 02:50:34 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:04.012 02:50:34 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.012 02:50:34 thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.012 ************************************ 00:06:04.012 START TEST thread_poller_perf 00:06:04.012 ************************************ 00:06:04.012 02:50:34 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:04.012 [2024-12-05 02:50:34.680296] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:04.012 [2024-12-05 02:50:34.680720] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59511 ] 00:06:04.012 [2024-12-05 02:50:34.838573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.270 [2024-12-05 02:50:34.920875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.270 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:05.202 [2024-12-05T02:50:36.046Z] ====================================== 00:06:05.202 [2024-12-05T02:50:36.046Z] busy:2602789564 (cyc) 00:06:05.202 [2024-12-05T02:50:36.046Z] total_run_count: 5280000 00:06:05.202 [2024-12-05T02:50:36.046Z] tsc_hz: 2600000000 (cyc) 00:06:05.202 [2024-12-05T02:50:36.046Z] ====================================== 00:06:05.202 [2024-12-05T02:50:36.046Z] poller_cost: 492 (cyc), 189 (nsec) 00:06:05.202 00:06:05.202 real 0m1.392s 00:06:05.202 user 0m1.217s 00:06:05.202 sys 0m0.068s 00:06:05.460 02:50:36 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.461 02:50:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:05.461 ************************************ 00:06:05.461 END TEST thread_poller_perf 00:06:05.461 ************************************ 00:06:05.461 02:50:36 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:05.461 ************************************ 00:06:05.461 END TEST thread 00:06:05.461 ************************************ 00:06:05.461 00:06:05.461 real 0m3.018s 00:06:05.461 user 0m2.556s 00:06:05.461 sys 0m0.240s 00:06:05.461 02:50:36 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.461 02:50:36 thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.461 02:50:36 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:05.461 02:50:36 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:05.461 02:50:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.461 02:50:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.461 02:50:36 -- common/autotest_common.sh@10 -- # set +x 00:06:05.461 ************************************ 00:06:05.461 START TEST app_cmdline 00:06:05.461 ************************************ 00:06:05.461 02:50:36 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:05.461 * Looking for test storage... 
00:06:05.461 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:05.461 02:50:36 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:05.461 02:50:36 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:05.461 02:50:36 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:05.461 02:50:36 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:05.461 02:50:36 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.461 02:50:36 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.461 02:50:36 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.461 02:50:36 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.461 02:50:36 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.461 02:50:36 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.461 02:50:36 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.461 02:50:36 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.461 02:50:36 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.461 02:50:36 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.461 02:50:36 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.461 02:50:36 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:05.461 02:50:36 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:05.461 02:50:36 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.461 02:50:36 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.461 02:50:36 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:05.461 02:50:36 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:05.461 02:50:36 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.461 02:50:36 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:05.461 02:50:36 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.461 02:50:36 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:05.461 02:50:36 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:05.461 02:50:36 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.461 02:50:36 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:05.461 02:50:36 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.461 02:50:36 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.461 02:50:36 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.461 02:50:36 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:05.461 02:50:36 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.461 02:50:36 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:05.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.461 --rc genhtml_branch_coverage=1 00:06:05.461 --rc genhtml_function_coverage=1 00:06:05.461 --rc genhtml_legend=1 00:06:05.461 --rc geninfo_all_blocks=1 00:06:05.461 --rc geninfo_unexecuted_blocks=1 00:06:05.461 00:06:05.461 ' 00:06:05.461 02:50:36 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:05.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.461 --rc genhtml_branch_coverage=1 00:06:05.461 --rc genhtml_function_coverage=1 00:06:05.461 --rc genhtml_legend=1 00:06:05.461 --rc geninfo_all_blocks=1 00:06:05.461 --rc geninfo_unexecuted_blocks=1 00:06:05.461 
00:06:05.461 ' 00:06:05.461 02:50:36 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:05.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.461 --rc genhtml_branch_coverage=1 00:06:05.461 --rc genhtml_function_coverage=1 00:06:05.461 --rc genhtml_legend=1 00:06:05.461 --rc geninfo_all_blocks=1 00:06:05.461 --rc geninfo_unexecuted_blocks=1 00:06:05.461 00:06:05.461 ' 00:06:05.461 02:50:36 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:05.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.461 --rc genhtml_branch_coverage=1 00:06:05.461 --rc genhtml_function_coverage=1 00:06:05.461 --rc genhtml_legend=1 00:06:05.461 --rc geninfo_all_blocks=1 00:06:05.461 --rc geninfo_unexecuted_blocks=1 00:06:05.461 00:06:05.461 ' 00:06:05.461 02:50:36 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:05.461 02:50:36 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59595 00:06:05.461 02:50:36 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59595 00:06:05.461 02:50:36 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59595 ']' 00:06:05.461 02:50:36 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.461 02:50:36 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.461 02:50:36 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.461 02:50:36 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:05.461 02:50:36 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.461 02:50:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:05.719 [2024-12-05 02:50:36.352438] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:06:05.719 [2024-12-05 02:50:36.352559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59595 ] 00:06:05.719 [2024-12-05 02:50:36.506103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.977 [2024-12-05 02:50:36.601811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.543 02:50:37 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.543 02:50:37 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:06.543 02:50:37 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:06.543 { 00:06:06.543 "version": "SPDK v25.01-pre git sha1 8d3947977", 00:06:06.543 "fields": { 00:06:06.543 "major": 25, 00:06:06.543 "minor": 1, 00:06:06.543 "patch": 0, 00:06:06.543 "suffix": "-pre", 00:06:06.543 "commit": "8d3947977" 00:06:06.543 } 00:06:06.543 } 00:06:06.800 02:50:37 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:06.800 02:50:37 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:06.800 02:50:37 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:06.800 02:50:37 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:06.800 02:50:37 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:06.800 02:50:37 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:06.800 02:50:37 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:06.800 02:50:37 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.800 02:50:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:06.800 02:50:37 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.800 02:50:37 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:06.800 02:50:37 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:06.800 02:50:37 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:06.800 02:50:37 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:06.800 02:50:37 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:06.800 02:50:37 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:06.800 02:50:37 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.800 02:50:37 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:06.800 02:50:37 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.800 02:50:37 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:06.800 02:50:37 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.800 02:50:37 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:06.800 02:50:37 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:06.800 02:50:37 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:06.801 request: 00:06:06.801 { 00:06:06.801 "method": "env_dpdk_get_mem_stats", 00:06:06.801 "req_id": 1 00:06:06.801 } 00:06:06.801 Got JSON-RPC error response 00:06:06.801 response: 00:06:06.801 { 00:06:06.801 "code": -32601, 00:06:06.801 "message": "Method not found" 00:06:06.801 } 00:06:06.801 02:50:37 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:06.801 02:50:37 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:06.801 02:50:37 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:06.801 02:50:37 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:06.801 02:50:37 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59595 00:06:06.801 02:50:37 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59595 ']' 00:06:06.801 02:50:37 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59595 00:06:06.801 02:50:37 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:06.801 02:50:37 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.801 02:50:37 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59595 00:06:07.059 02:50:37 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.059 02:50:37 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.059 02:50:37 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59595' 00:06:07.059 killing process with pid 59595 00:06:07.059 02:50:37 app_cmdline -- common/autotest_common.sh@973 -- # kill 59595 00:06:07.059 02:50:37 app_cmdline -- common/autotest_common.sh@978 -- # wait 59595 00:06:08.433 00:06:08.433 real 0m2.998s 00:06:08.433 user 0m3.284s 00:06:08.433 sys 0m0.452s 00:06:08.433 02:50:39 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.433 02:50:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:08.433 ************************************ 00:06:08.433 END TEST app_cmdline 00:06:08.433 ************************************ 00:06:08.433 02:50:39 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:08.433 02:50:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.433 02:50:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.433 02:50:39 -- common/autotest_common.sh@10 -- # set +x 00:06:08.433 ************************************ 00:06:08.433 START TEST version 00:06:08.433 ************************************ 00:06:08.433 02:50:39 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:08.433 * Looking for test storage... 
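The cmdline test above starts spdk_tgt with an RPC allow-list and then checks both sides of it: the two whitelisted methods answer normally, and anything else is rejected with JSON-RPC error -32601. A minimal reproduction, assuming the same repo-root-relative paths used in the log and a running target, is:

    # Only spdk_get_version and rpc_get_methods are permitted on this target.
    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py spdk_get_version         # allowed: returns the version object seen above
    scripts/rpc.py rpc_get_methods          # allowed: lists exactly the permitted methods
    scripts/rpc.py env_dpdk_get_mem_stats   # blocked: error -32601 "Method not found"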
00:06:08.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:08.433 02:50:39 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:08.433 02:50:39 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:08.433 02:50:39 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:08.691 02:50:39 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:08.691 02:50:39 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.691 02:50:39 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.691 02:50:39 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.691 02:50:39 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.691 02:50:39 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.691 02:50:39 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.691 02:50:39 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.691 02:50:39 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.692 02:50:39 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.692 02:50:39 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.692 02:50:39 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.692 02:50:39 version -- scripts/common.sh@344 -- # case "$op" in 00:06:08.692 02:50:39 version -- scripts/common.sh@345 -- # : 1 00:06:08.692 02:50:39 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.692 02:50:39 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.692 02:50:39 version -- scripts/common.sh@365 -- # decimal 1 00:06:08.692 02:50:39 version -- scripts/common.sh@353 -- # local d=1 00:06:08.692 02:50:39 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.692 02:50:39 version -- scripts/common.sh@355 -- # echo 1 00:06:08.692 02:50:39 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.692 02:50:39 version -- scripts/common.sh@366 -- # decimal 2 00:06:08.692 02:50:39 version -- scripts/common.sh@353 -- # local d=2 00:06:08.692 02:50:39 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.692 02:50:39 version -- scripts/common.sh@355 -- # echo 2 00:06:08.692 02:50:39 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.692 02:50:39 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.692 02:50:39 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.692 02:50:39 version -- scripts/common.sh@368 -- # return 0 00:06:08.692 02:50:39 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.692 02:50:39 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:08.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.692 --rc genhtml_branch_coverage=1 00:06:08.692 --rc genhtml_function_coverage=1 00:06:08.692 --rc genhtml_legend=1 00:06:08.692 --rc geninfo_all_blocks=1 00:06:08.692 --rc geninfo_unexecuted_blocks=1 00:06:08.692 00:06:08.692 ' 00:06:08.692 02:50:39 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:08.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.692 --rc genhtml_branch_coverage=1 00:06:08.692 --rc genhtml_function_coverage=1 00:06:08.692 --rc genhtml_legend=1 00:06:08.692 --rc geninfo_all_blocks=1 00:06:08.692 --rc geninfo_unexecuted_blocks=1 00:06:08.692 00:06:08.692 ' 00:06:08.692 02:50:39 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:08.692 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:08.692 --rc genhtml_branch_coverage=1 00:06:08.692 --rc genhtml_function_coverage=1 00:06:08.692 --rc genhtml_legend=1 00:06:08.692 --rc geninfo_all_blocks=1 00:06:08.692 --rc geninfo_unexecuted_blocks=1 00:06:08.692 00:06:08.692 ' 00:06:08.692 02:50:39 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:08.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.692 --rc genhtml_branch_coverage=1 00:06:08.692 --rc genhtml_function_coverage=1 00:06:08.692 --rc genhtml_legend=1 00:06:08.692 --rc geninfo_all_blocks=1 00:06:08.692 --rc geninfo_unexecuted_blocks=1 00:06:08.692 00:06:08.692 ' 00:06:08.692 02:50:39 version -- app/version.sh@17 -- # get_header_version major 00:06:08.692 02:50:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:08.692 02:50:39 version -- app/version.sh@14 -- # cut -f2 00:06:08.692 02:50:39 version -- app/version.sh@14 -- # tr -d '"' 00:06:08.692 02:50:39 version -- app/version.sh@17 -- # major=25 00:06:08.692 02:50:39 version -- app/version.sh@18 -- # get_header_version minor 00:06:08.692 02:50:39 version -- app/version.sh@14 -- # cut -f2 00:06:08.692 02:50:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:08.692 02:50:39 version -- app/version.sh@14 -- # tr -d '"' 00:06:08.692 02:50:39 version -- app/version.sh@18 -- # minor=1 00:06:08.692 02:50:39 version -- app/version.sh@19 -- # get_header_version patch 00:06:08.692 02:50:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:08.692 02:50:39 version -- app/version.sh@14 -- # tr -d '"' 00:06:08.692 02:50:39 version -- app/version.sh@14 -- # cut -f2 00:06:08.692 02:50:39 version -- app/version.sh@19 -- # patch=0 00:06:08.692 02:50:39 version -- app/version.sh@20 -- # get_header_version suffix 00:06:08.692 02:50:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:08.692 02:50:39 version -- app/version.sh@14 -- # cut -f2 00:06:08.692 02:50:39 version -- app/version.sh@14 -- # tr -d '"' 00:06:08.692 02:50:39 version -- app/version.sh@20 -- # suffix=-pre 00:06:08.692 02:50:39 version -- app/version.sh@22 -- # version=25.1 00:06:08.692 02:50:39 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:08.692 02:50:39 version -- app/version.sh@28 -- # version=25.1rc0 00:06:08.692 02:50:39 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:08.692 02:50:39 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:08.692 02:50:39 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:08.692 02:50:39 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:08.692 ************************************ 00:06:08.692 END TEST version 00:06:08.692 ************************************ 00:06:08.692 00:06:08.692 real 0m0.188s 00:06:08.692 user 0m0.133s 00:06:08.692 sys 0m0.080s 00:06:08.692 02:50:39 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.692 02:50:39 version -- common/autotest_common.sh@10 -- # set +x 00:06:08.692 02:50:39 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:08.692 02:50:39 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:08.692 02:50:39 -- spdk/autotest.sh@194 -- # uname -s 00:06:08.692 02:50:39 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:08.692 02:50:39 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:08.692 02:50:39 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:08.692 02:50:39 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:08.692 02:50:39 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:08.692 02:50:39 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:08.692 02:50:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.692 02:50:39 -- common/autotest_common.sh@10 -- # set +x 00:06:08.692 ************************************ 00:06:08.692 START TEST blockdev_nvme 00:06:08.692 ************************************ 00:06:08.692 02:50:39 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:08.692 * Looking for test storage... 00:06:08.692 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:08.692 02:50:39 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:08.692 02:50:39 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:06:08.692 02:50:39 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:08.692 02:50:39 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:08.692 02:50:39 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.692 02:50:39 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.692 02:50:39 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.692 02:50:39 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.692 02:50:39 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.692 02:50:39 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.692 02:50:39 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.692 02:50:39 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.692 02:50:39 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.692 02:50:39 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.692 02:50:39 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.692 02:50:39 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:08.692 02:50:39 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:08.692 02:50:39 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.692 02:50:39 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.692 02:50:39 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:08.692 02:50:39 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:08.692 02:50:39 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.692 02:50:39 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:08.692 02:50:39 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.692 02:50:39 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:08.692 02:50:39 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:08.692 02:50:39 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.692 02:50:39 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:08.950 02:50:39 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.950 02:50:39 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.950 02:50:39 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.950 02:50:39 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:08.950 02:50:39 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.950 02:50:39 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:08.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.950 --rc genhtml_branch_coverage=1 00:06:08.950 --rc genhtml_function_coverage=1 00:06:08.950 --rc genhtml_legend=1 00:06:08.950 --rc geninfo_all_blocks=1 00:06:08.950 --rc geninfo_unexecuted_blocks=1 00:06:08.950 00:06:08.950 ' 00:06:08.950 02:50:39 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:08.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.950 --rc genhtml_branch_coverage=1 00:06:08.951 --rc genhtml_function_coverage=1 00:06:08.951 --rc genhtml_legend=1 00:06:08.951 --rc geninfo_all_blocks=1 00:06:08.951 --rc geninfo_unexecuted_blocks=1 00:06:08.951 00:06:08.951 ' 00:06:08.951 02:50:39 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:08.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.951 --rc genhtml_branch_coverage=1 00:06:08.951 --rc genhtml_function_coverage=1 00:06:08.951 --rc genhtml_legend=1 00:06:08.951 --rc geninfo_all_blocks=1 00:06:08.951 --rc geninfo_unexecuted_blocks=1 00:06:08.951 00:06:08.951 ' 00:06:08.951 02:50:39 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:08.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.951 --rc genhtml_branch_coverage=1 00:06:08.951 --rc genhtml_function_coverage=1 00:06:08.951 --rc genhtml_legend=1 00:06:08.951 --rc geninfo_all_blocks=1 00:06:08.951 --rc geninfo_unexecuted_blocks=1 00:06:08.951 00:06:08.951 ' 00:06:08.951 02:50:39 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:08.951 02:50:39 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:08.951 02:50:39 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:08.951 02:50:39 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:08.951 02:50:39 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:08.951 02:50:39 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:08.951 02:50:39 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:08.951 02:50:39 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:08.951 02:50:39 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:08.951 02:50:39 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:08.951 02:50:39 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:08.951 02:50:39 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:08.951 02:50:39 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:06:08.951 02:50:39 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:08.951 02:50:39 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:08.951 02:50:39 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:06:08.951 02:50:39 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:08.951 02:50:39 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:06:08.951 02:50:39 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:08.951 02:50:39 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:08.951 02:50:39 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:08.951 02:50:39 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:06:08.951 02:50:39 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:06:08.951 02:50:39 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:08.951 02:50:39 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59772 00:06:08.951 02:50:39 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:08.951 02:50:39 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59772 00:06:08.951 02:50:39 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59772 ']' 00:06:08.951 02:50:39 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.951 02:50:39 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.951 02:50:39 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.951 02:50:39 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:08.951 02:50:39 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.951 02:50:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:08.951 [2024-12-05 02:50:39.619435] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:06:08.951 [2024-12-05 02:50:39.619690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59772 ] 00:06:08.951 [2024-12-05 02:50:39.779719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.210 [2024-12-05 02:50:39.877704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.777 02:50:40 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.777 02:50:40 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:09.777 02:50:40 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:09.777 02:50:40 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:06:09.777 02:50:40 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:09.777 02:50:40 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:09.777 02:50:40 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:09.777 02:50:40 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:09.777 02:50:40 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.777 02:50:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:10.036 02:50:40 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.036 02:50:40 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:10.036 02:50:40 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.036 02:50:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:10.036 02:50:40 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.036 02:50:40 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:06:10.036 02:50:40 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:10.036 02:50:40 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.036 02:50:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:10.036 02:50:40 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.036 02:50:40 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:10.036 02:50:40 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.036 02:50:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:10.036 02:50:40 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.036 02:50:40 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:10.036 02:50:40 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.036 02:50:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:10.036 02:50:40 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.036 02:50:40 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:10.036 02:50:40 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:10.036 02:50:40 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.036 02:50:40 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:10.036 02:50:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:10.295 02:50:40 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.295 02:50:40 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:10.296 02:50:40 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "d9109cf2-0374-40f2-87f8-9054395008e7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d9109cf2-0374-40f2-87f8-9054395008e7",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "8bab8a09-4818-415f-89a6-7c7c68971e17"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "8bab8a09-4818-415f-89a6-7c7c68971e17",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' 
"ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "120cb7a8-22ec-48b0-912c-ba7f2395e089"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "120cb7a8-22ec-48b0-912c-ba7f2395e089",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "9038e5e1-7aa6-42f3-a557-d6b47a3ac96f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9038e5e1-7aa6-42f3-a557-d6b47a3ac96f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "a3db51c9-1255-47da-a8e1-a77e0eaf76a3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a3db51c9-1255-47da-a8e1-a77e0eaf76a3",' ' "numa_id": -1,' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "f569bfb7-b682-42db-9929-89afb893593e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "f569bfb7-b682-42db-9929-89afb893593e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:10.296 02:50:40 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:10.296 02:50:40 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:10.296 02:50:40 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:10.296 02:50:40 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:10.296 02:50:40 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 59772 00:06:10.296 02:50:40 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59772 ']' 00:06:10.296 02:50:40 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59772 00:06:10.296 02:50:40 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:10.296 02:50:40 blockdev_nvme -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.296 02:50:40 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59772 00:06:10.296 killing process with pid 59772 00:06:10.296 02:50:40 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.296 02:50:40 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.296 02:50:40 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59772' 00:06:10.296 02:50:40 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59772 00:06:10.296 02:50:40 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59772 00:06:11.803 02:50:42 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:11.803 02:50:42 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:11.803 02:50:42 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:11.803 02:50:42 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.803 02:50:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.803 ************************************ 00:06:11.803 START TEST bdev_hello_world 00:06:11.803 ************************************ 00:06:11.803 02:50:42 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:11.803 [2024-12-05 02:50:42.544134] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:11.803 [2024-12-05 02:50:42.544246] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59851 ] 00:06:12.060 [2024-12-05 02:50:42.707007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.061 [2024-12-05 02:50:42.807996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.628 [2024-12-05 02:50:43.347776] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:12.628 [2024-12-05 02:50:43.347952] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:12.628 [2024-12-05 02:50:43.347976] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:12.628 [2024-12-05 02:50:43.350498] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:12.628 [2024-12-05 02:50:43.351106] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:12.628 [2024-12-05 02:50:43.351130] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:12.628 [2024-12-05 02:50:43.351747] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
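The hello_bdev example exercised above can also be run on its own with the same arguments the script passes; a sketch, assuming the generated test/bdev/bdev.json is still in place:
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1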
00:06:12.628 00:06:12.628 [2024-12-05 02:50:43.351772] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:13.563 ************************************ 00:06:13.563 END TEST bdev_hello_world 00:06:13.563 ************************************ 00:06:13.563 00:06:13.563 real 0m1.594s 00:06:13.563 user 0m1.306s 00:06:13.563 sys 0m0.180s 00:06:13.563 02:50:44 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.563 02:50:44 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:13.563 02:50:44 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:13.563 02:50:44 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:13.563 02:50:44 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.563 02:50:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:13.563 ************************************ 00:06:13.563 START TEST bdev_bounds 00:06:13.563 ************************************ 00:06:13.563 02:50:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:13.563 02:50:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59887 00:06:13.563 02:50:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:13.563 Process bdevio pid: 59887 00:06:13.563 02:50:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59887' 00:06:13.563 02:50:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59887 00:06:13.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.563 02:50:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59887 ']' 00:06:13.563 02:50:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.563 02:50:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.563 02:50:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:13.563 02:50:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.563 02:50:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.563 02:50:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:13.563 [2024-12-05 02:50:44.191948] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:06:13.563 [2024-12-05 02:50:44.192538] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59887 ] 00:06:13.563 [2024-12-05 02:50:44.352981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:13.821 [2024-12-05 02:50:44.455811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.822 [2024-12-05 02:50:44.456136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.822 [2024-12-05 02:50:44.455895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.389 02:50:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.389 02:50:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:14.389 02:50:45 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:14.389 I/O targets: 00:06:14.389 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:14.389 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:14.389 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:14.389 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:14.389 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:14.389 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:14.389 00:06:14.389 00:06:14.389 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.389 http://cunit.sourceforge.net/ 00:06:14.389 00:06:14.389 00:06:14.389 Suite: bdevio tests on: Nvme3n1 00:06:14.389 Test: blockdev write read block ...passed 00:06:14.389 Test: blockdev write zeroes read block ...passed 00:06:14.389 Test: blockdev write zeroes read no split ...passed 00:06:14.389 Test: blockdev write zeroes read split ...passed 00:06:14.389 Test: blockdev write zeroes read split partial ...passed 00:06:14.389 Test: blockdev reset ...[2024-12-05 02:50:45.171161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:14.389 [2024-12-05 02:50:45.175595] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. passed
00:06:14.389 00:06:14.389 Test: blockdev write read 8 blocks ...passed 00:06:14.389 Test: blockdev write read size > 128k ...passed 00:06:14.389 Test: blockdev write read invalid size ...passed 00:06:14.389 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:14.389 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:14.389 Test: blockdev write read max offset ...passed 00:06:14.389 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:14.389 Test: blockdev writev readv 8 blocks ...passed 00:06:14.389 Test: blockdev writev readv 30 x 1block ...passed 00:06:14.389 Test: blockdev writev readv block ...passed 00:06:14.389 Test: blockdev writev readv size > 128k ...passed 00:06:14.389 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:14.389 Test: blockdev comparev and writev ...[2024-12-05 02:50:45.196084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b540a000 len:0x1000 00:06:14.389 [2024-12-05 02:50:45.196132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:14.389 passed 00:06:14.389 Test: blockdev nvme passthru rw ...passed 00:06:14.389 Test: blockdev nvme passthru vendor specific ...passed 00:06:14.389 Test: blockdev nvme admin passthru ...[2024-12-05 02:50:45.198588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:14.389 [2024-12-05 02:50:45.198629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:14.389 passed 00:06:14.389 Test: blockdev copy ...passed 00:06:14.389 Suite: bdevio tests on: Nvme2n3 00:06:14.389 Test: blockdev write read block ...passed 00:06:14.389 Test: blockdev write zeroes read block ...passed 00:06:14.389 Test: blockdev write zeroes read no split ...passed 00:06:14.389 Test: blockdev write zeroes read split ...passed 00:06:14.647 Test: blockdev write zeroes read split partial ...passed 00:06:14.647 Test: blockdev reset ...[2024-12-05 02:50:45.251024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:14.647 [2024-12-05 02:50:45.258400] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:06:14.647 Test: blockdev write read 8 blocks ...
00:06:14.647 passed 00:06:14.647 Test: blockdev write read size > 128k ...passed 00:06:14.647 Test: blockdev write read invalid size ...passed 00:06:14.647 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:14.647 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:14.647 Test: blockdev write read max offset ...passed 00:06:14.647 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:14.647 Test: blockdev writev readv 8 blocks ...passed 00:06:14.647 Test: blockdev writev readv 30 x 1block ...passed 00:06:14.647 Test: blockdev writev readv block ...passed 00:06:14.647 Test: blockdev writev readv size > 128k ...passed 00:06:14.647 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:14.647 Test: blockdev comparev and writev ...[2024-12-05 02:50:45.278978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x294a06000 len:0x1000 00:06:14.647 [2024-12-05 02:50:45.279027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:14.647 passed 00:06:14.647 Test: blockdev nvme passthru rw ...passed 00:06:14.647 Test: blockdev nvme passthru vendor specific ...[2024-12-05 02:50:45.281544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 passed 00:06:14.647 Test: blockdev nvme admin passthru ... 00:06:14.647 [2024-12-05 02:50:45.281655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:14.647 passed 00:06:14.647 Test: blockdev copy ...passed 00:06:14.647 Suite: bdevio tests on: Nvme2n2 00:06:14.647 Test: blockdev write read block ...passed 00:06:14.647 Test: blockdev write zeroes read block ...passed 00:06:14.647 Test: blockdev write zeroes read no split ...passed 00:06:14.647 Test: blockdev write zeroes read split ...passed 00:06:14.647 Test: blockdev write zeroes read split partial ...passed 00:06:14.647 Test: blockdev reset ...[2024-12-05 02:50:45.344428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:14.647 [2024-12-05 02:50:45.348932] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:06:14.647 passed 00:06:14.647 Test: blockdev write read 8 blocks ...passed 00:06:14.647 Test: blockdev write read size > 128k ...passed 00:06:14.647 Test: blockdev write read invalid size ...passed 00:06:14.647 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:14.647 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:14.647 Test: blockdev write read max offset ...passed 00:06:14.647 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:14.647 Test: blockdev writev readv 8 blocks ...passed 00:06:14.647 Test: blockdev writev readv 30 x 1block ...passed 00:06:14.647 Test: blockdev writev readv block ...passed 00:06:14.647 Test: blockdev writev readv size > 128k ...passed 00:06:14.647 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:14.647 Test: blockdev comparev and writev ...[2024-12-05 02:50:45.364272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cd03c000 len:0x1000 00:06:14.647 [2024-12-05 02:50:45.364336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:14.647 passed 00:06:14.647 Test: blockdev nvme passthru rw ...passed 00:06:14.647 Test: blockdev nvme passthru vendor specific ...passed 00:06:14.647 Test: blockdev nvme admin passthru ...[2024-12-05 02:50:45.366701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:14.647 [2024-12-05 02:50:45.366734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:14.647 passed 00:06:14.647 Test: blockdev copy ...passed 00:06:14.647 Suite: bdevio tests on: Nvme2n1 00:06:14.648 Test: blockdev write read block ...passed 00:06:14.648 Test: blockdev write zeroes read block ...passed 00:06:14.648 Test: blockdev write zeroes read no split ...passed 00:06:14.648 Test: blockdev write zeroes read split ...passed 00:06:14.648 Test: blockdev write zeroes read split partial ...passed 00:06:14.648 Test: blockdev reset ...[2024-12-05 02:50:45.426169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:14.648 [2024-12-05 02:50:45.429555] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:06:14.648 Test: blockdev write read 8 blocks ...
00:06:14.648 passed 00:06:14.648 Test: blockdev write read size > 128k ...passed 00:06:14.648 Test: blockdev write read invalid size ...passed 00:06:14.648 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:14.648 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:14.648 Test: blockdev write read max offset ...passed 00:06:14.648 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:14.648 Test: blockdev writev readv 8 blocks ...passed 00:06:14.648 Test: blockdev writev readv 30 x 1block ...passed 00:06:14.648 Test: blockdev writev readv block ...passed 00:06:14.648 Test: blockdev writev readv size > 128k ...passed 00:06:14.648 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:14.648 Test: blockdev comparev and writev ...[2024-12-05 02:50:45.448421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cd038000 len:0x1000 00:06:14.648 [2024-12-05 02:50:45.448464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:14.648 passed 00:06:14.648 Test: blockdev nvme passthru rw ...passed 00:06:14.648 Test: blockdev nvme passthru vendor specific ...[2024-12-05 02:50:45.450937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:14.648 [2024-12-05 02:50:45.450969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:14.648 passed 00:06:14.648 Test: blockdev nvme admin passthru ...passed 00:06:14.648 Test: blockdev copy ...passed 00:06:14.648 Suite: bdevio tests on: Nvme1n1 00:06:14.648 Test: blockdev write read block ...passed 00:06:14.648 Test: blockdev write zeroes read block ...passed 00:06:14.648 Test: blockdev write zeroes read no split ...passed 00:06:14.648 Test: blockdev write zeroes read split ...passed 00:06:14.906 Test: blockdev write zeroes read split partial ...passed 00:06:14.906 Test: blockdev reset ...[2024-12-05 02:50:45.508142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:14.906 [2024-12-05 02:50:45.510909] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. passed
00:06:14.906 00:06:14.906 Test: blockdev write read 8 blocks ...passed 00:06:14.906 Test: blockdev write read size > 128k ...passed 00:06:14.906 Test: blockdev write read invalid size ...passed 00:06:14.906 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:14.906 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:14.906 Test: blockdev write read max offset ...passed 00:06:14.906 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:14.906 Test: blockdev writev readv 8 blocks ...passed 00:06:14.906 Test: blockdev writev readv 30 x 1block ...passed 00:06:14.906 Test: blockdev writev readv block ...passed 00:06:14.906 Test: blockdev writev readv size > 128k ...passed 00:06:14.906 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:14.906 Test: blockdev comparev and writev ...[2024-12-05 02:50:45.528937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cd034000 len:0x1000 00:06:14.906 [2024-12-05 02:50:45.528979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:14.906 passed 00:06:14.906 Test: blockdev nvme passthru rw ...passed 00:06:14.906 Test: blockdev nvme passthru vendor specific ...passed 00:06:14.906 Test: blockdev nvme admin passthru ...[2024-12-05 02:50:45.531344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:14.906 [2024-12-05 02:50:45.531377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:14.906 passed 00:06:14.906 Test: blockdev copy ...passed 00:06:14.906 Suite: bdevio tests on: Nvme0n1 00:06:14.906 Test: blockdev write read block ...passed 00:06:14.906 Test: blockdev write zeroes read block ...passed 00:06:14.906 Test: blockdev write zeroes read no split ...passed 00:06:14.906 Test: blockdev write zeroes read split ...passed 00:06:14.906 Test: blockdev write zeroes read split partial ...passed 00:06:14.906 Test: blockdev reset ...[2024-12-05 02:50:45.591311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:14.906 [2024-12-05 02:50:45.594311] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. passed 00:06:14.906 Test: blockdev write read 8 blocks ...
00:06:14.906 passed 00:06:14.906 Test: blockdev write read size > 128k ...passed 00:06:14.906 Test: blockdev write read invalid size ...passed 00:06:14.906 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:14.906 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:14.906 Test: blockdev write read max offset ...passed 00:06:14.906 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:14.906 Test: blockdev writev readv 8 blocks ...passed 00:06:14.906 Test: blockdev writev readv 30 x 1block ...passed 00:06:14.906 Test: blockdev writev readv block ...passed 00:06:14.906 Test: blockdev writev readv size > 128k ...passed 00:06:14.906 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:14.906 Test: blockdev comparev and writev ...passed 00:06:14.906 Test: blockdev nvme passthru rw ...[2024-12-05 02:50:45.609608] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:14.906 separate metadata which is not supported yet. 00:06:14.906 passed 00:06:14.906 Test: blockdev nvme passthru vendor specific ...passed 00:06:14.906 Test: blockdev nvme admin passthru ...[2024-12-05 02:50:45.611506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:14.906 [2024-12-05 02:50:45.611548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:14.906 passed 00:06:14.906 Test: blockdev copy ...passed 00:06:14.906 00:06:14.906 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.906 suites 6 6 n/a 0 0 00:06:14.906 tests 138 138 138 0 0 00:06:14.906 asserts 893 893 893 0 n/a 00:06:14.906 00:06:14.906 Elapsed time = 1.252 seconds 00:06:14.906 0 00:06:14.906 02:50:45 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59887 00:06:14.906 02:50:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59887 ']' 00:06:14.906 02:50:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59887 00:06:14.906 02:50:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:14.906 02:50:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.906 02:50:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59887 00:06:14.906 02:50:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.906 02:50:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.906 killing process with pid 59887 00:06:14.906 02:50:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59887' 00:06:14.906 02:50:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59887 00:06:14.906 02:50:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59887 00:06:15.838 02:50:46 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:15.838 00:06:15.838 real 0m2.221s 00:06:15.838 user 0m5.611s 00:06:15.838 sys 0m0.271s 00:06:15.838 02:50:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.838 ************************************ 00:06:15.838 02:50:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:15.838 END TEST bdev_bounds 00:06:15.838 
************************************ 00:06:15.838 02:50:46 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:15.838 02:50:46 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:15.838 02:50:46 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.838 02:50:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:15.838 ************************************ 00:06:15.838 START TEST bdev_nbd 00:06:15.838 ************************************ 00:06:15.838 02:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:15.838 02:50:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:15.838 02:50:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:15.838 02:50:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.838 02:50:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:15.838 02:50:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:15.838 02:50:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:15.838 02:50:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:15.838 02:50:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:15.838 02:50:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:15.838 02:50:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:15.838 02:50:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:15.838 02:50:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:15.838 02:50:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:15.838 02:50:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:15.838 02:50:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:15.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
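The nbd_function_test flow that follows boils down to: start bdev_svc listening on /var/tmp/spdk-nbd.sock, export each bdev as a /dev/nbdX device via RPC, and verify it with a single direct-I/O dd read. A condensed sketch of that flow, assuming /tmp/nbdtest as a scratch output file (the socket-polling loop is illustrative; the harness waits via waitforlisten):
  cd /home/vagrant/spdk_repo/spdk
  ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock --json test/bdev/bdev.json &
  until [ -S /var/tmp/spdk-nbd.sock ]; do sleep 0.1; done
  nbd_dev=$(./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1)   # prints the assigned /dev/nbdX
  dd if="$nbd_dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$nbd_dev"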
00:06:15.838 02:50:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=59947 00:06:15.838 02:50:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:15.838 02:50:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 59947 /var/tmp/spdk-nbd.sock 00:06:15.838 02:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 59947 ']' 00:06:15.838 02:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.838 02:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.838 02:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:15.838 02:50:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:15.838 02:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.838 02:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:15.838 [2024-12-05 02:50:46.480042] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:15.838 [2024-12-05 02:50:46.480310] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:15.838 [2024-12-05 02:50:46.632893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.096 [2024-12-05 02:50:46.734136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.662 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.662 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:16.662 02:50:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:16.662 02:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.662 02:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:16.662 02:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:16.662 02:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:16.662 02:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.662 02:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:16.662 02:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:16.662 02:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:16.662 02:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:16.662 02:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:16.662 02:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:16.662 02:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:16.921 02:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:16.921 02:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:16.921 02:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:16.921 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:16.921 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:16.921 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:16.921 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:16.921 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:16.921 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:16.921 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:16.921 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:16.921 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:16.921 1+0 records in 00:06:16.921 1+0 records out 00:06:16.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111318 s, 3.7 MB/s 00:06:16.921 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.921 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:16.921 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.921 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:16.921 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:16.921 02:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:16.921 02:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:16.921 02:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:16.921 02:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:16.921 02:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:17.179 02:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:17.179 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:17.179 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:17.179 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:17.179 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:17.179 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:17.179 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:17.179 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:17.179 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:17.179 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:17.179 1+0 records in 00:06:17.179 1+0 records out 00:06:17.179 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000945022 s, 4.3 MB/s 00:06:17.179 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.179 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:17.179 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.179 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:17.179 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:17.179 02:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:17.179 02:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:17.179 02:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:17.179 02:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:17.179 02:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:17.179 02:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:17.179 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:17.179 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:17.179 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:17.179 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:17.179 02:50:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:17.179 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:17.179 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:17.179 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:17.179 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:17.179 1+0 records in 00:06:17.179 1+0 records out 00:06:17.179 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115844 s, 3.5 MB/s 00:06:17.179 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.179 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:17.179 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.179 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:17.179 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:17.179 02:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:17.179 02:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:17.179 02:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:17.436 02:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:17.436 02:50:48 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:17.436 02:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:17.436 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:17.436 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:17.436 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:17.436 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:17.436 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:17.436 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:17.436 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:17.436 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:17.436 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:17.436 1+0 records in 00:06:17.436 1+0 records out 00:06:17.436 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000858178 s, 4.8 MB/s 00:06:17.436 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.436 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:17.436 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.436 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:17.436 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:17.436 02:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:17.436 02:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:17.436 02:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:17.694 02:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:17.694 02:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:17.694 02:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:17.694 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:17.694 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:17.694 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:17.695 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:17.695 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:17.695 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:17.695 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:17.695 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:17.695 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:17.695 1+0 records in 00:06:17.695 1+0 records out 00:06:17.695 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109112 s, 3.8 MB/s 00:06:17.695 02:50:48 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.695 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:17.695 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.695 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:17.695 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:17.695 02:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:17.695 02:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:17.695 02:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:17.954 02:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:17.954 02:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:17.954 02:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:17.954 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:17.954 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:17.954 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:17.954 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:17.954 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:17.954 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:17.954 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:17.954 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:17.954 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:17.954 1+0 records in 00:06:17.954 1+0 records out 00:06:17.954 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011348 s, 3.6 MB/s 00:06:17.954 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.954 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:17.954 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.954 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:17.954 02:50:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:17.954 02:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:17.954 02:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:17.954 02:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.213 02:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:18.213 { 00:06:18.213 "nbd_device": "/dev/nbd0", 00:06:18.213 "bdev_name": "Nvme0n1" 00:06:18.213 }, 00:06:18.213 { 00:06:18.213 "nbd_device": "/dev/nbd1", 00:06:18.213 "bdev_name": "Nvme1n1" 00:06:18.213 }, 00:06:18.213 { 00:06:18.213 "nbd_device": "/dev/nbd2", 00:06:18.213 "bdev_name": "Nvme2n1" 00:06:18.213 }, 00:06:18.213 { 
00:06:18.213 "nbd_device": "/dev/nbd3", 00:06:18.213 "bdev_name": "Nvme2n2" 00:06:18.213 }, 00:06:18.213 { 00:06:18.213 "nbd_device": "/dev/nbd4", 00:06:18.213 "bdev_name": "Nvme2n3" 00:06:18.213 }, 00:06:18.213 { 00:06:18.213 "nbd_device": "/dev/nbd5", 00:06:18.213 "bdev_name": "Nvme3n1" 00:06:18.213 } 00:06:18.213 ]' 00:06:18.213 02:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:18.213 02:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:18.213 { 00:06:18.213 "nbd_device": "/dev/nbd0", 00:06:18.213 "bdev_name": "Nvme0n1" 00:06:18.213 }, 00:06:18.213 { 00:06:18.213 "nbd_device": "/dev/nbd1", 00:06:18.213 "bdev_name": "Nvme1n1" 00:06:18.213 }, 00:06:18.213 { 00:06:18.213 "nbd_device": "/dev/nbd2", 00:06:18.213 "bdev_name": "Nvme2n1" 00:06:18.213 }, 00:06:18.213 { 00:06:18.213 "nbd_device": "/dev/nbd3", 00:06:18.213 "bdev_name": "Nvme2n2" 00:06:18.213 }, 00:06:18.213 { 00:06:18.213 "nbd_device": "/dev/nbd4", 00:06:18.213 "bdev_name": "Nvme2n3" 00:06:18.213 }, 00:06:18.213 { 00:06:18.213 "nbd_device": "/dev/nbd5", 00:06:18.213 "bdev_name": "Nvme3n1" 00:06:18.213 } 00:06:18.213 ]' 00:06:18.213 02:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:18.213 02:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:18.213 02:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.213 02:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:18.213 02:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:18.213 02:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:18.213 02:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.213 02:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:18.471 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:18.471 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:18.471 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:18.471 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.471 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.471 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:18.471 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.471 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.471 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.471 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:18.729 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:18.729 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:18.729 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:18.729 02:50:49 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.729 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.729 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:18.729 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.729 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.729 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.729 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:18.988 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:18.988 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:18.988 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:18.988 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.988 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.988 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:18.988 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.988 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.988 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.988 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:18.988 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:18.988 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:18.988 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:18.988 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.988 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.988 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:18.988 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.988 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.988 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.988 02:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:19.246 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:19.246 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:19.246 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:19.246 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.246 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.246 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:19.246 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:19.246 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.246 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.246 02:50:50 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:19.505 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:19.505 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:19.505 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:19.505 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.505 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.505 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:19.505 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:19.505 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.505 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.505 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.505 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.763 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:19.763 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:19.763 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.763 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:19.763 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:19.763 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.763 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:19.763 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:19.763 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:19.763 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:19.763 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:19.763 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:19.764 02:50:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:19.764 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.764 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:19.764 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:19.764 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:19.764 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:19.764 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:19.764 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.764 02:50:50 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:19.764 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:19.764 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:19.764 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:19.764 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:19.764 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:19.764 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:19.764 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:20.022 /dev/nbd0 00:06:20.022 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:20.022 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:20.022 02:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:20.022 02:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:20.022 02:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:20.022 02:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:20.023 02:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:20.023 02:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:20.023 02:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:20.023 02:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:20.023 02:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:20.023 1+0 records in 00:06:20.023 1+0 records out 00:06:20.023 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000955934 s, 4.3 MB/s 00:06:20.023 02:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.023 02:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:20.023 02:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.023 02:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:20.023 02:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:20.023 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.023 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:20.023 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:20.281 /dev/nbd1 00:06:20.281 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:20.281 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:20.281 02:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:20.281 02:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:20.281 02:50:50 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:20.281 02:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:20.281 02:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:20.281 02:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:20.281 02:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:20.281 02:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:20.281 02:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:20.281 1+0 records in 00:06:20.281 1+0 records out 00:06:20.281 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105686 s, 3.9 MB/s 00:06:20.281 02:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.281 02:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:20.281 02:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.281 02:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:20.281 02:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:20.281 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.281 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:20.281 02:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:20.541 /dev/nbd10 00:06:20.541 02:50:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:20.541 02:50:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:20.541 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:20.541 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:20.541 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:20.541 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:20.541 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:20.541 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:20.541 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:20.541 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:20.541 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:20.541 1+0 records in 00:06:20.541 1+0 records out 00:06:20.541 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113555 s, 3.6 MB/s 00:06:20.541 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.541 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:20.541 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.541 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
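The xtrace lines above repeat one readiness probe per attached device: poll /proc/partitions until the nbd node appears, then prove it is readable by copying a single 4 KiB block with O_DIRECT and checking that the copy is non-empty. A minimal sketch of that waitfornbd helper as reconstructed from this trace (the retry delay and the failure path are not visible in the log and are assumptions; the real helper lives in test/common/autotest_common.sh, and the scratch path is simplified here):

waitfornbd() {
    local nbd_name=$1 i
    # Poll up to 20 times until the kernel publishes the device in
    # /proc/partitions (the sh@875-877 loop in the trace above).
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed back-off; not shown in the trace
    done
    # Read one 4 KiB block with O_DIRECT and require a non-empty copy
    # (the dd/stat/rm/'[' 4096 != 0 ']' sequence at sh@888-893).
    for ((i = 1; i <= 20; i++)); do
        if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
            local size
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
        fi
        sleep 0.1
    done
    return 1
}

In this run the probe succeeds on the first iteration for every device, which is why each trace collapses to grep, break, dd, stat, rm, return 0.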
00:06:20.541 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:20.541 02:50:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.541 02:50:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:20.541 02:50:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:20.801 /dev/nbd11 00:06:20.801 02:50:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:20.801 02:50:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:20.801 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:20.801 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:20.801 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:20.801 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:20.801 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:20.801 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:20.801 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:20.801 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:20.801 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:20.801 1+0 records in 00:06:20.801 1+0 records out 00:06:20.801 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.001553 s, 2.6 MB/s 00:06:20.801 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.801 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:20.801 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.801 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:20.801 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:20.801 02:50:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.801 02:50:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:20.801 02:50:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:20.801 /dev/nbd12 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i 
= 1 )) 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:21.060 1+0 records in 00:06:21.060 1+0 records out 00:06:21.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100487 s, 4.1 MB/s 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:21.060 /dev/nbd13 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:21.060 1+0 records in 00:06:21.060 1+0 records out 00:06:21.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352068 s, 11.6 MB/s 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:21.060 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.369 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:21.369 02:50:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:21.369 02:50:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.369 02:50:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:21.369 02:50:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.369 02:50:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 
-- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.369 02:50:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.369 02:50:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:21.369 { 00:06:21.369 "nbd_device": "/dev/nbd0", 00:06:21.369 "bdev_name": "Nvme0n1" 00:06:21.369 }, 00:06:21.369 { 00:06:21.369 "nbd_device": "/dev/nbd1", 00:06:21.369 "bdev_name": "Nvme1n1" 00:06:21.369 }, 00:06:21.369 { 00:06:21.369 "nbd_device": "/dev/nbd10", 00:06:21.369 "bdev_name": "Nvme2n1" 00:06:21.369 }, 00:06:21.369 { 00:06:21.369 "nbd_device": "/dev/nbd11", 00:06:21.369 "bdev_name": "Nvme2n2" 00:06:21.369 }, 00:06:21.369 { 00:06:21.369 "nbd_device": "/dev/nbd12", 00:06:21.369 "bdev_name": "Nvme2n3" 00:06:21.369 }, 00:06:21.369 { 00:06:21.369 "nbd_device": "/dev/nbd13", 00:06:21.369 "bdev_name": "Nvme3n1" 00:06:21.369 } 00:06:21.369 ]' 00:06:21.369 02:50:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.369 02:50:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:21.369 { 00:06:21.369 "nbd_device": "/dev/nbd0", 00:06:21.369 "bdev_name": "Nvme0n1" 00:06:21.369 }, 00:06:21.369 { 00:06:21.369 "nbd_device": "/dev/nbd1", 00:06:21.369 "bdev_name": "Nvme1n1" 00:06:21.369 }, 00:06:21.369 { 00:06:21.369 "nbd_device": "/dev/nbd10", 00:06:21.369 "bdev_name": "Nvme2n1" 00:06:21.369 }, 00:06:21.369 { 00:06:21.369 "nbd_device": "/dev/nbd11", 00:06:21.369 "bdev_name": "Nvme2n2" 00:06:21.369 }, 00:06:21.369 { 00:06:21.369 "nbd_device": "/dev/nbd12", 00:06:21.369 "bdev_name": "Nvme2n3" 00:06:21.369 }, 00:06:21.369 { 00:06:21.369 "nbd_device": "/dev/nbd13", 00:06:21.369 "bdev_name": "Nvme3n1" 00:06:21.369 } 00:06:21.369 ]' 00:06:21.369 02:50:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:21.369 /dev/nbd1 00:06:21.369 /dev/nbd10 00:06:21.369 /dev/nbd11 00:06:21.369 /dev/nbd12 00:06:21.369 /dev/nbd13' 00:06:21.369 02:50:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:21.369 /dev/nbd1 00:06:21.369 /dev/nbd10 00:06:21.369 /dev/nbd11 00:06:21.369 /dev/nbd12 00:06:21.369 /dev/nbd13' 00:06:21.369 02:50:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.369 02:50:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:21.369 02:50:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:21.369 02:50:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:21.369 02:50:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:21.369 02:50:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:21.369 02:50:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:21.369 02:50:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.369 02:50:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:21.369 02:50:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:21.369 02:50:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:21.369 02:50:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:21.369 256+0 records in 00:06:21.369 256+0 records out 00:06:21.369 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0094296 s, 111 MB/s 00:06:21.369 02:50:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.369 02:50:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:21.640 256+0 records in 00:06:21.640 256+0 records out 00:06:21.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.212588 s, 4.9 MB/s 00:06:21.640 02:50:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.640 02:50:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:21.898 256+0 records in 00:06:21.898 256+0 records out 00:06:21.898 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.202906 s, 5.2 MB/s 00:06:21.898 02:50:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.898 02:50:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:22.155 256+0 records in 00:06:22.155 256+0 records out 00:06:22.155 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.216125 s, 4.9 MB/s 00:06:22.155 02:50:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.155 02:50:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:22.155 256+0 records in 00:06:22.155 256+0 records out 00:06:22.155 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169902 s, 6.2 MB/s 00:06:22.155 02:50:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.155 02:50:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:22.413 256+0 records in 00:06:22.413 256+0 records out 00:06:22.413 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.183219 s, 5.7 MB/s 00:06:22.413 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.413 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:22.672 256+0 records in 00:06:22.672 256+0 records out 00:06:22.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.176892 s, 5.9 MB/s 00:06:22.672 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:22.672 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:22.672 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.672 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:22.672 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:22.672 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:22.672 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:22.672 
02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.672 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:22.672 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.672 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:22.672 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.672 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:22.672 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.672 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:22.672 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.672 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:22.672 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.672 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:22.672 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:22.672 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:22.672 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.672 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:22.672 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:22.672 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:22.672 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.672 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:22.931 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:22.931 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:22.931 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:22.931 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.931 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.931 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:22.931 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:22.931 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.931 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.931 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:23.190 02:50:53 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:23.190 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:23.190 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:23.190 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.190 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.190 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:23.190 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:23.190 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.190 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.190 02:50:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:23.190 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:23.190 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:23.190 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:23.190 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.190 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.190 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:23.190 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:23.190 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.190 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.190 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:23.449 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:23.449 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:23.449 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:23.449 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.449 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.449 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:23.449 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:23.449 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.449 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.449 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:23.708 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:23.708 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:23.708 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:23.708 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.708 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.708 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd12 /proc/partitions 00:06:23.708 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:23.708 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.708 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.708 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:23.968 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:23.968 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:23.968 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:23.968 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.968 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.968 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:23.968 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:23.968 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.968 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.968 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.968 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.228 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:24.228 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.228 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:24.228 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:24.228 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.228 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:24.228 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:24.228 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:24.228 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:24.228 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:24.228 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:24.228 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:24.228 02:50:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:24.228 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.228 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:24.228 02:50:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:24.487 malloc_lvol_verify 00:06:24.487 02:50:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:24.487 aa962a19-cfdd-4ab6-a9aa-0c1bc85910fd 00:06:24.487 02:50:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:24.746 86cf4f0f-8708-4a4e-96b7-a60b231184bf 00:06:24.746 02:50:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:25.006 /dev/nbd0 00:06:25.006 02:50:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:25.006 02:50:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:25.006 02:50:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:25.006 02:50:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:25.006 02:50:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:25.006 mke2fs 1.47.0 (5-Feb-2023) 00:06:25.006 Discarding device blocks: 0/4096 done 00:06:25.006 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:25.006 00:06:25.006 Allocating group tables: 0/1 done 00:06:25.006 Writing inode tables: 0/1 done 00:06:25.006 Creating journal (1024 blocks): done 00:06:25.006 Writing superblocks and filesystem accounting information: 0/1 done 00:06:25.006 00:06:25.006 02:50:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:25.006 02:50:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.006 02:50:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:25.006 02:50:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:25.006 02:50:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:25.006 02:50:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.006 02:50:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:25.266 02:50:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:25.266 02:50:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:25.266 02:50:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:25.266 02:50:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.266 02:50:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.266 02:50:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:25.267 02:50:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:25.267 02:50:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.267 02:50:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 59947 00:06:25.267 02:50:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 59947 ']' 00:06:25.267 02:50:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 59947 00:06:25.267 02:50:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:25.267 02:50:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.267 02:50:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59947 00:06:25.267 02:50:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.267 02:50:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 
-- # '[' reactor_0 = sudo ']' 00:06:25.267 killing process with pid 59947 00:06:25.267 02:50:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59947' 00:06:25.267 02:50:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 59947 00:06:25.267 02:50:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 59947 00:06:26.211 02:50:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:26.211 00:06:26.211 real 0m10.463s 00:06:26.211 user 0m14.360s 00:06:26.211 sys 0m3.334s 00:06:26.211 02:50:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.211 02:50:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:26.211 ************************************ 00:06:26.211 END TEST bdev_nbd 00:06:26.211 ************************************ 00:06:26.211 skipping fio tests on NVMe due to multi-ns failures. 00:06:26.211 02:50:56 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:26.211 02:50:56 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:06:26.211 02:50:56 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:26.211 02:50:56 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:26.211 02:50:56 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:26.211 02:50:56 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:26.211 02:50:56 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.211 02:50:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:26.211 ************************************ 00:06:26.211 START TEST bdev_verify 00:06:26.211 ************************************ 00:06:26.211 02:50:56 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:26.211 [2024-12-05 02:50:57.021871] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:26.211 [2024-12-05 02:50:57.022027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60329 ] 00:06:26.472 [2024-12-05 02:50:57.189215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.733 [2024-12-05 02:50:57.319432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.733 [2024-12-05 02:50:57.319530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.305 Running I/O for 5 seconds... 
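bdev_verify drives all six NVMe bdevs through SPDK's bdevperf example with a read-back-verify workload: queue depth 128 (-q), 4 KiB I/Os (-o 4096), 5 seconds (-t 5), spread over two reactor cores (-m 0x3, confirmed by the two reactors above), which is why every namespace appears twice in the results below, once per core mask. The bdev.json passed via --json is generated earlier in the run and is not reproduced in this log; the following is only an illustration of the shape such a config can take, using the standard bdev_nvme_attach_controller RPC method (the PCIe address is a placeholder):

# Illustrative only -- the real bdev.json for this run is not shown in the log.
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "PCIe", "traddr": "0000:00:10.0" }
        }
      ]
    }
  ]
}
EOF

One attach entry per controller yields the Nvme0n1 through Nvme3n1 bdevs exercised here.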
00:06:29.633 19456.00 IOPS, 76.00 MiB/s
[2024-12-05T02:51:01.414Z] 18688.00 IOPS, 73.00 MiB/s
[2024-12-05T02:51:02.406Z] 18709.33 IOPS, 73.08 MiB/s
[2024-12-05T02:51:03.336Z] 18880.00 IOPS, 73.75 MiB/s
[2024-12-05T02:51:03.336Z] 18867.20 IOPS, 73.70 MiB/s
00:06:32.492 Latency(us)
00:06:32.492 [2024-12-05T02:51:03.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:32.492 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:32.492 Verification LBA range: start 0x0 length 0xbd0bd
00:06:32.492 Nvme0n1 : 5.04 1548.60 6.05 0.00 0.00 82357.53 15930.29 81466.29
00:06:32.492 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:32.492 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:06:32.492 Nvme0n1 : 5.06 1556.84 6.08 0.00 0.00 81848.45 9880.81 84289.38
00:06:32.492 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:32.492 Verification LBA range: start 0x0 length 0xa0000
00:06:32.492 Nvme1n1 : 5.04 1548.16 6.05 0.00 0.00 82210.07 17341.83 73803.62
00:06:32.492 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:32.492 Verification LBA range: start 0xa0000 length 0xa0000
00:06:32.492 Nvme1n1 : 5.06 1555.82 6.08 0.00 0.00 81766.40 11998.13 81062.99
00:06:32.492 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:32.492 Verification LBA range: start 0x0 length 0x80000
00:06:32.492 Nvme2n1 : 5.06 1555.12 6.07 0.00 0.00 81763.88 6654.42 72997.02
00:06:32.492 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:32.492 Verification LBA range: start 0x80000 length 0x80000
00:06:32.492 Nvme2n1 : 5.07 1564.49 6.11 0.00 0.00 81265.57 10485.76 75416.81
00:06:32.492 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:32.492 Verification LBA range: start 0x0 length 0x80000
00:06:32.492 Nvme2n2 : 5.06 1554.28 6.07 0.00 0.00 81659.69 7864.32 69367.34
00:06:32.492 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:32.492 Verification LBA range: start 0x80000 length 0x80000
00:06:32.492 Nvme2n2 : 5.07 1564.07 6.11 0.00 0.00 81134.76 10687.41 67754.14
00:06:32.492 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:32.492 Verification LBA range: start 0x0 length 0x80000
00:06:32.492 Nvme2n3 : 5.08 1563.21 6.11 0.00 0.00 81162.99 9124.63 67350.84
00:06:32.492 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:32.492 Verification LBA range: start 0x80000 length 0x80000
00:06:32.492 Nvme2n3 : 5.08 1563.63 6.11 0.00 0.00 81010.72 10989.88 64124.46
00:06:32.492 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:32.492 Verification LBA range: start 0x0 length 0x20000
00:06:32.492 Nvme3n1 : 5.08 1562.08 6.10 0.00 0.00 81048.85 11897.30 68157.44
00:06:32.492 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:32.492 Verification LBA range: start 0x20000 length 0x20000
00:06:32.492 Nvme3n1 : 5.08 1563.17 6.11 0.00 0.00 80913.03 11292.36 66544.25
00:06:32.492 [2024-12-05T02:51:03.336Z] ===================================================================================================================
00:06:32.492 [2024-12-05T02:51:03.336Z] Total : 18699.46 73.04 0.00 0.00 81509.07 6654.42 84289.38
00:06:33.423
00:06:33.423 real 0m7.278s
00:06:33.423 user 0m13.473s
00:06:33.423 sys 0m0.303s
00:06:33.423 02:51:04 blockdev_nvme.bdev_verify --
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.423 ************************************ 00:06:33.423 END TEST bdev_verify 00:06:33.423 ************************************ 00:06:33.423 02:51:04 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:33.679 02:51:04 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:33.679 02:51:04 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:33.679 02:51:04 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.679 02:51:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:33.679 ************************************ 00:06:33.679 START TEST bdev_verify_big_io 00:06:33.679 ************************************ 00:06:33.679 02:51:04 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:33.679 [2024-12-05 02:51:04.357885] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:33.679 [2024-12-05 02:51:04.358010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60427 ] 00:06:33.679 [2024-12-05 02:51:04.519511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:33.935 [2024-12-05 02:51:04.623692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.935 [2024-12-05 02:51:04.623768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.498 Running I/O for 5 seconds... 
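Every test in this log is wrapped by run_test, which prints the asterisk START/END banners and the real/user/sys timing block seen after each test body. A simplified sketch of that wrapper as implied by the banners (the real helper in test/common/autotest_common.sh does more bookkeeping, such as the '[' 16 -le 1 ']' argument check visible in the trace):

# Simplified sketch; banner widths and bookkeeping differ in the real helper.
run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"    # emits the real/user/sys block after the test body
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}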
00:06:38.421 144.00 IOPS, 9.00 MiB/s
[2024-12-05T02:51:10.641Z] 1450.50 IOPS, 90.66 MiB/s
[2024-12-05T02:51:11.576Z] 1666.00 IOPS, 104.12 MiB/s
[2024-12-05T02:51:11.576Z] 1946.75 IOPS, 121.67 MiB/s
00:06:40.732 Latency(us)
00:06:40.732 [2024-12-05T02:51:11.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:40.732 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:40.732 Verification LBA range: start 0x0 length 0xbd0b
00:06:40.732 Nvme0n1 : 5.52 118.75 7.42 0.00 0.00 1016964.94 22383.06 1096971.82
00:06:40.732 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:40.732 Verification LBA range: start 0xbd0b length 0xbd0b
00:06:40.732 Nvme0n1 : 5.68 115.57 7.22 0.00 0.00 1062184.10 14821.22 1116330.14
00:06:40.732 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:40.732 Verification LBA range: start 0x0 length 0xa000
00:06:40.732 Nvme1n1 : 5.70 130.62 8.16 0.00 0.00 918685.61 47992.52 1019538.51
00:06:40.732 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:40.732 Verification LBA range: start 0xa000 length 0xa000
00:06:40.732 Nvme1n1 : 5.69 116.77 7.30 0.00 0.00 1015896.68 90742.15 909841.33
00:06:40.732 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:40.732 Verification LBA range: start 0x0 length 0x8000
00:06:40.732 Nvme2n1 : 5.70 130.60 8.16 0.00 0.00 886585.22 49202.41 1038896.84
00:06:40.732 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:40.732 Verification LBA range: start 0x8000 length 0x8000
00:06:40.732 Nvme2n1 : 5.80 121.42 7.59 0.00 0.00 945965.76 104051.00 922746.88
00:06:40.732 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:40.732 Verification LBA range: start 0x0 length 0x8000
00:06:40.732 Nvme2n2 : 5.79 136.75 8.55 0.00 0.00 819595.91 83482.78 1058255.16
00:06:40.732 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:40.732 Verification LBA range: start 0x8000 length 0x8000
00:06:40.732 Nvme2n2 : 5.91 126.91 7.93 0.00 0.00 880858.55 36095.21 1426063.36
00:06:40.732 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:40.732 Verification LBA range: start 0x0 length 0x8000
00:06:40.732 Nvme2n3 : 5.91 151.97 9.50 0.00 0.00 718650.01 32465.53 1084066.26
00:06:40.732 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:40.732 Verification LBA range: start 0x8000 length 0x8000
00:06:40.732 Nvme2n3 : 5.93 126.43 7.90 0.00 0.00 859035.20 18753.38 2051982.57
00:06:40.732 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:40.732 Verification LBA range: start 0x0 length 0x2000
00:06:40.732 Nvme3n1 : 5.93 172.70 10.79 0.00 0.00 616300.91 466.31 1109877.37
00:06:40.732 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:40.732 Verification LBA range: start 0x2000 length 0x2000
00:06:40.732 Nvme3n1 : 5.98 157.31 9.83 0.00 0.00 669727.69 460.01 2077793.67
00:06:40.732 [2024-12-05T02:51:11.576Z] ===================================================================================================================
00:06:40.732 [2024-12-05T02:51:11.576Z] Total : 1605.80 100.36 0.00 0.00 848268.41 460.01 2077793.67
00:06:42.133
00:06:42.133 real 0m8.621s
00:06:42.133 user 0m16.338s
00:06:42.133 sys 0m0.244s
00:06:42.133 02:51:12 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- #
xtrace_disable 00:06:42.133 ************************************ 00:06:42.133 END TEST bdev_verify_big_io 00:06:42.133 ************************************ 00:06:42.133 02:51:12 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:42.133 02:51:12 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:42.133 02:51:12 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:42.133 02:51:12 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.133 02:51:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:42.133 ************************************ 00:06:42.133 START TEST bdev_write_zeroes 00:06:42.133 ************************************ 00:06:42.133 02:51:12 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:42.392 [2024-12-05 02:51:13.029655] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:42.392 [2024-12-05 02:51:13.029778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60539 ] 00:06:42.392 [2024-12-05 02:51:13.185278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.653 [2024-12-05 02:51:13.274436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.223 Running I/O for 1 seconds... 
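bdev_write_zeroes issues 4 KiB write-zeroes commands at queue depth 128 on a single reactor core for one second (per the -q 128 -o 4096 -w write_zeroes -t 1 invocation above). When the summary below prints, the aggregate MiB/s column can be cross-checked from the IOPS column, since every I/O is 4096 bytes:

# Sanity check of the Total row below (1 MiB = 1048576 bytes):
echo 'scale=2; 63500.62 * 4096 / 1048576' | bc
# prints 248.04 (bc truncates); the table's 248.05 is the same value rounded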
00:06:44.159 63680.00 IOPS, 248.75 MiB/s
00:06:44.159 Latency(us)
00:06:44.159 [2024-12-05T02:51:15.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:44.159 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:44.159 Nvme0n1 : 1.02 10624.34 41.50 0.00 0.00 12023.90 5016.02 25508.63
00:06:44.159 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:44.159 Nvme1n1 : 1.02 10612.13 41.45 0.00 0.00 12026.90 7360.20 20769.87
00:06:44.159 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:44.159 Nvme2n1 : 1.02 10599.81 41.41 0.00 0.00 12012.60 7511.43 21072.34
00:06:44.159 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:44.159 Nvme2n2 : 1.02 10587.77 41.36 0.00 0.00 11998.46 7360.20 21173.17
00:06:44.159 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:44.159 Nvme2n3 : 1.02 10575.54 41.31 0.00 0.00 11983.42 7309.78 20467.40
00:06:44.159 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:44.159 Nvme3n1 : 1.02 10501.04 41.02 0.00 0.00 12032.07 7360.20 20366.57
00:06:44.159 [2024-12-05T02:51:15.003Z] ===================================================================================================================
00:06:44.159 [2024-12-05T02:51:15.003Z] Total : 63500.62 248.05 0.00 0.00 12012.87 5016.02 25508.63
00:06:45.097
00:06:45.097 real 0m2.637s
00:06:45.097 user 0m2.345s
00:06:45.097 sys 0m0.178s
00:06:45.097 02:51:15 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:45.097 02:51:15 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:06:45.097 ************************************
00:06:45.097 END TEST bdev_write_zeroes
00:06:45.097 ************************************
00:06:45.097 02:51:15 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:45.097 02:51:15 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:45.097 02:51:15 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:45.097 02:51:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:45.097 ************************************
00:06:45.097 START TEST bdev_json_nonenclosed
00:06:45.097 ************************************
00:06:45.097 02:51:15 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:45.097 [2024-12-05 02:51:15.735220] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
00:06:45.097 [2024-12-05 02:51:15.735365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60592 ] 00:06:45.097 [2024-12-05 02:51:15.900794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.355 [2024-12-05 02:51:16.037954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.355 [2024-12-05 02:51:16.038135] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:45.355 [2024-12-05 02:51:16.038174] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:45.355 [2024-12-05 02:51:16.038191] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:45.615 00:06:45.615 real 0m0.576s 00:06:45.615 user 0m0.356s 00:06:45.615 sys 0m0.114s 00:06:45.615 02:51:16 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.615 ************************************ 00:06:45.615 END TEST bdev_json_nonenclosed 00:06:45.615 ************************************ 00:06:45.615 02:51:16 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:45.615 02:51:16 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:45.615 02:51:16 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:45.615 02:51:16 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.615 02:51:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:45.615 ************************************ 00:06:45.615 START TEST bdev_json_nonarray 00:06:45.615 ************************************ 00:06:45.615 02:51:16 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:45.615 [2024-12-05 02:51:16.372895] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:45.615 [2024-12-05 02:51:16.373065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60612 ] 00:06:45.875 [2024-12-05 02:51:16.536521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.875 [2024-12-05 02:51:16.670673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.875 [2024-12-05 02:51:16.670796] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
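The two JSON cases that finished just above (bdev_json_nonenclosed and bdev_json_nonarray) are negative tests: bdevperf is pointed at configs that break the expected layout, and each step passes when json_config rejects the file with exactly the error shown. The real nonenclosed.json and nonarray.json are not reproduced in this log; the shape being enforced is simply a top-level object whose "subsystems" member is an array, roughly as in this illustrative sketch (file name and contents are an assumption, not the test fixtures):

cat > /tmp/minimal_bdev_config.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev", "config": [] }
  ]
}
EOF
# "not enclosed in {}"              -> the top level of the file is not a JSON object
# "'subsystems' should be an array" -> "subsystems" is present but not an array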
00:06:45.875 [2024-12-05 02:51:16.670817] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:45.875 [2024-12-05 02:51:16.670828] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.137 00:06:46.137 real 0m0.570s 00:06:46.137 user 0m0.354s 00:06:46.137 sys 0m0.109s 00:06:46.137 02:51:16 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.137 ************************************ 00:06:46.137 END TEST bdev_json_nonarray 00:06:46.137 ************************************ 00:06:46.137 02:51:16 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:46.137 02:51:16 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:06:46.137 02:51:16 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:06:46.137 02:51:16 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:06:46.137 02:51:16 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:06:46.137 02:51:16 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:06:46.137 02:51:16 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:46.137 02:51:16 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:46.137 02:51:16 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:46.137 02:51:16 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:46.137 02:51:16 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:46.137 02:51:16 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:46.137 00:06:46.137 real 0m37.518s 00:06:46.137 user 0m57.346s 00:06:46.137 sys 0m5.459s 00:06:46.137 02:51:16 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.137 02:51:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:46.137 ************************************ 00:06:46.137 END TEST blockdev_nvme 00:06:46.137 ************************************ 00:06:46.137 02:51:16 -- spdk/autotest.sh@209 -- # uname -s 00:06:46.137 02:51:16 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:46.137 02:51:16 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:46.137 02:51:16 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:46.137 02:51:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.137 02:51:16 -- common/autotest_common.sh@10 -- # set +x 00:06:46.398 ************************************ 00:06:46.398 START TEST blockdev_nvme_gpt 00:06:46.398 ************************************ 00:06:46.398 02:51:16 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:46.398 * Looking for test storage... 
00:06:46.398 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:46.398 02:51:17 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:46.398 02:51:17 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:06:46.398 02:51:17 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:46.398 02:51:17 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:46.398 02:51:17 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.398 02:51:17 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.398 02:51:17 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.398 02:51:17 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.398 02:51:17 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.398 02:51:17 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.398 02:51:17 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.398 02:51:17 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.398 02:51:17 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.398 02:51:17 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.398 02:51:17 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.398 02:51:17 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:46.398 02:51:17 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:46.398 02:51:17 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.398 02:51:17 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.398 02:51:17 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:46.398 02:51:17 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:46.398 02:51:17 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.398 02:51:17 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:46.398 02:51:17 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.398 02:51:17 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:46.398 02:51:17 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:46.398 02:51:17 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.398 02:51:17 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:46.398 02:51:17 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.398 02:51:17 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.398 02:51:17 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.398 02:51:17 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:46.398 02:51:17 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.398 02:51:17 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:46.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.398 --rc genhtml_branch_coverage=1 00:06:46.398 --rc genhtml_function_coverage=1 00:06:46.398 --rc genhtml_legend=1 00:06:46.398 --rc geninfo_all_blocks=1 00:06:46.398 --rc geninfo_unexecuted_blocks=1 00:06:46.398 00:06:46.398 ' 00:06:46.398 02:51:17 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:46.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.398 --rc 
genhtml_branch_coverage=1 00:06:46.398 --rc genhtml_function_coverage=1 00:06:46.398 --rc genhtml_legend=1 00:06:46.398 --rc geninfo_all_blocks=1 00:06:46.398 --rc geninfo_unexecuted_blocks=1 00:06:46.398 00:06:46.398 ' 00:06:46.398 02:51:17 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:46.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.398 --rc genhtml_branch_coverage=1 00:06:46.398 --rc genhtml_function_coverage=1 00:06:46.398 --rc genhtml_legend=1 00:06:46.398 --rc geninfo_all_blocks=1 00:06:46.398 --rc geninfo_unexecuted_blocks=1 00:06:46.398 00:06:46.398 ' 00:06:46.398 02:51:17 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:46.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.398 --rc genhtml_branch_coverage=1 00:06:46.398 --rc genhtml_function_coverage=1 00:06:46.398 --rc genhtml_legend=1 00:06:46.398 --rc geninfo_all_blocks=1 00:06:46.398 --rc geninfo_unexecuted_blocks=1 00:06:46.399 00:06:46.399 ' 00:06:46.399 02:51:17 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:46.399 02:51:17 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:46.399 02:51:17 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:46.399 02:51:17 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:46.399 02:51:17 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:46.399 02:51:17 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:46.399 02:51:17 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:46.399 02:51:17 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:46.399 02:51:17 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:46.399 02:51:17 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:46.399 02:51:17 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:46.399 02:51:17 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:46.399 02:51:17 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:06:46.399 02:51:17 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:46.399 02:51:17 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:46.399 02:51:17 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:06:46.399 02:51:17 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:46.399 02:51:17 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:06:46.399 02:51:17 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:46.399 02:51:17 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:46.399 02:51:17 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:46.399 02:51:17 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:06:46.399 02:51:17 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:06:46.399 02:51:17 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:46.399 02:51:17 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60696 00:06:46.399 02:51:17 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:46.399 02:51:17 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60696 
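Once spdk_tgt is launched, waitforlisten blocks until the target answers on /var/tmp/spdk.sock before the GPT setup below continues. Outside the harness the same wait can be approximated by polling a trivial RPC (a sketch of the idea, not the harness helper itself):

SPDK=/home/vagrant/spdk_repo/spdk
for _ in $(seq 1 100); do
    sudo "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.2
done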
00:06:46.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.399 02:51:17 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60696 ']' 00:06:46.399 02:51:17 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.399 02:51:17 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.399 02:51:17 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.399 02:51:17 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:46.399 02:51:17 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.399 02:51:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:46.660 [2024-12-05 02:51:17.251290] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:46.660 [2024-12-05 02:51:17.251448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60696 ] 00:06:46.660 [2024-12-05 02:51:17.414585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.921 [2024-12-05 02:51:17.558447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.492 02:51:18 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.492 02:51:18 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:47.492 02:51:18 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:47.492 02:51:18 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:06:47.492 02:51:18 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:47.753 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:48.015 Waiting for block devices as requested 00:06:48.015 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:48.015 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:48.275 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:48.275 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:53.551 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:53.551 02:51:24 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:06:53.551 02:51:24 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:53.551 02:51:24 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:53.551 02:51:24 
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:53.551 02:51:24 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:53.551 02:51:24 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:53.551 02:51:24 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:53.551 02:51:24 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:53.551 02:51:24 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:53.552 02:51:24 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:53.552 02:51:24 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:53.552 BYT; 00:06:53.552 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:53.552 02:51:24 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:53.552 BYT; 00:06:53.552 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:53.552 02:51:24 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:53.552 02:51:24 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:53.552 02:51:24 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:53.552 02:51:24 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:53.552 02:51:24 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:53.552 02:51:24 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:53.552 02:51:24 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:53.552 02:51:24 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:53.552 02:51:24 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:53.552 02:51:24 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:53.552 02:51:24 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:53.552 02:51:24 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:53.552 02:51:24 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:53.552 02:51:24 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:53.552 02:51:24 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:53.552 02:51:24 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:53.552 02:51:24 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:53.552 02:51:24 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:53.552 02:51:24 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:53.552 02:51:24 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:53.552 02:51:24 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:53.552 02:51:24 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:53.552 02:51:24 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:53.552 02:51:24 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:53.552 02:51:24 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:53.552 02:51:24 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:53.552 02:51:24 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:53.552 02:51:24 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:53.552 02:51:24 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:54.487 The operation has completed successfully. 00:06:54.487 02:51:25 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:55.869 The operation has completed successfully. 00:06:55.869 02:51:26 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:55.869 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:56.441 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:56.441 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:56.441 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:56.441 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:56.441 02:51:27 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:56.441 02:51:27 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.441 02:51:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:56.441 [] 00:06:56.441 02:51:27 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.441 02:51:27 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:56.441 02:51:27 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:56.441 02:51:27 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:56.441 02:51:27 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:56.701 02:51:27 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:56.701 02:51:27 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.701 02:51:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:56.962 02:51:27 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.962 02:51:27 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:56.962 02:51:27 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.962 02:51:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:56.962 02:51:27 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.962 02:51:27 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:06:56.962 02:51:27 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:56.962 02:51:27 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.962 02:51:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:56.962 02:51:27 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.962 02:51:27 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:56.962 02:51:27 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.962 02:51:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:56.962 02:51:27 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.962 02:51:27 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:56.962 02:51:27 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.962 02:51:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:56.962 02:51:27 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.962 02:51:27 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:56.962 02:51:27 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:56.962 02:51:27 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:56.962 02:51:27 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.962 02:51:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:56.962 02:51:27 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.962 02:51:27 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:56.962 02:51:27 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:56.963 02:51:27 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "ba0d70e4-08ba-461a-9767-20d25908ee2b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ba0d70e4-08ba-461a-9767-20d25908ee2b",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "1e97ff88-6ddb-4c14-be97-9347509fc5a7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1e97ff88-6ddb-4c14-be97-9347509fc5a7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "14a9483e-2c9a-4390-9226-4906cb624f60"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "14a9483e-2c9a-4390-9226-4906cb624f60",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "4066ac4c-7e99-4796-a101-745086a37a7a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4066ac4c-7e99-4796-a101-745086a37a7a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "4164b3b2-9172-41b9-8d12-97f2b969a7d4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "4164b3b2-9172-41b9-8d12-97f2b969a7d4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:56.963 02:51:27 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:56.963 02:51:27 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:56.963 02:51:27 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:56.963 02:51:27 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 60696 00:06:56.963 02:51:27 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60696 ']' 00:06:56.963 02:51:27 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60696 00:06:56.963 02:51:27 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:56.963 02:51:27 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.963 02:51:27 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60696 00:06:56.963 killing process with pid 60696 00:06:56.963 02:51:27 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:56.963 02:51:27 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:56.963 02:51:27 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60696' 00:06:56.963 02:51:27 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60696 00:06:56.963 02:51:27 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60696 00:06:58.864 02:51:29 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:58.864 02:51:29 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:58.864 02:51:29 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:58.864 02:51:29 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.864 02:51:29 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:58.864 ************************************ 00:06:58.864 START TEST bdev_hello_world 00:06:58.864 ************************************ 00:06:58.864 02:51:29 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:58.864 [2024-12-05 02:51:29.417855] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:58.864 [2024-12-05 02:51:29.418144] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61324 ] 00:06:58.864 [2024-12-05 02:51:29.579091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.864 [2024-12-05 02:51:29.680651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.428 [2024-12-05 02:51:30.226684] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:59.428 [2024-12-05 02:51:30.226741] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:59.428 [2024-12-05 02:51:30.226764] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:59.428 [2024-12-05 02:51:30.229213] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:59.428 [2024-12-05 02:51:30.229937] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:59.428 [2024-12-05 02:51:30.229963] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:59.428 [2024-12-05 02:51:30.230638] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:59.428 00:06:59.428 [2024-12-05 02:51:30.230666] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:00.360 ************************************ 00:07:00.360 END TEST bdev_hello_world 00:07:00.360 ************************************ 00:07:00.360 00:07:00.360 real 0m1.605s 00:07:00.360 user 0m1.325s 00:07:00.360 sys 0m0.173s 00:07:00.360 02:51:30 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.360 02:51:30 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:00.360 02:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:00.360 02:51:30 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:00.360 02:51:30 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.360 02:51:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:00.360 ************************************ 00:07:00.360 START TEST bdev_bounds 00:07:00.360 ************************************ 00:07:00.360 Process bdevio pid: 61361 00:07:00.360 02:51:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:00.360 02:51:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61361 00:07:00.360 02:51:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:00.360 02:51:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61361' 00:07:00.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
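The bdev_bounds step starting here runs the bdevio example in wait mode and then kicks off the boundary I/O cases over RPC, as the trace below shows. Condensed, the two commands involved are (paths and flags copied from the log; root privileges assumed):

SPDK=/home/vagrant/spdk_repo/spdk
sudo "$SPDK/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK/test/bdev/bdev.json" &
# ...then, once it is listening:
sudo "$SPDK/test/bdev/bdevio/tests.py" perform_tests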
00:07:00.360 02:51:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61361 00:07:00.360 02:51:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61361 ']' 00:07:00.360 02:51:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.360 02:51:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.360 02:51:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.360 02:51:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.360 02:51:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:00.360 02:51:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:00.360 [2024-12-05 02:51:31.065043] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:07:00.360 [2024-12-05 02:51:31.065182] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61361 ] 00:07:00.618 [2024-12-05 02:51:31.222322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:00.618 [2024-12-05 02:51:31.306337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.618 [2024-12-05 02:51:31.306600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.618 [2024-12-05 02:51:31.306627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.189 02:51:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.189 02:51:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:01.189 02:51:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:01.189 I/O targets: 00:07:01.189 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:01.189 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:01.189 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:01.189 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:01.189 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:01.189 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:01.189 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:01.189 00:07:01.189 00:07:01.189 CUnit - A unit testing framework for C - Version 2.1-3 00:07:01.189 http://cunit.sourceforge.net/ 00:07:01.189 00:07:01.189 00:07:01.189 Suite: bdevio tests on: Nvme3n1 00:07:01.189 Test: blockdev write read block ...passed 00:07:01.189 Test: blockdev write zeroes read block ...passed 00:07:01.189 Test: blockdev write zeroes read no split ...passed 00:07:01.189 Test: blockdev write zeroes read split ...passed 00:07:01.451 Test: blockdev write zeroes read split partial ...passed 00:07:01.451 Test: blockdev reset ...[2024-12-05 02:51:32.034600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:01.451 [2024-12-05 02:51:32.037758] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller spasseduccessful. 
00:07:01.451 00:07:01.451 Test: blockdev write read 8 blocks ...passed 00:07:01.451 Test: blockdev write read size > 128k ...passed 00:07:01.451 Test: blockdev write read invalid size ...passed 00:07:01.451 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.451 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.451 Test: blockdev write read max offset ...passed 00:07:01.451 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.451 Test: blockdev writev readv 8 blocks ...passed 00:07:01.451 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.451 Test: blockdev writev readv block ...passed 00:07:01.451 Test: blockdev writev readv size > 128k ...passed 00:07:01.451 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.451 Test: blockdev comparev and writev ...[2024-12-05 02:51:32.047840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b2c04000 len:0x1000 00:07:01.451 [2024-12-05 02:51:32.047995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:01.451 passed 00:07:01.451 Test: blockdev nvme passthru rw ...passed 00:07:01.451 Test: blockdev nvme passthru vendor specific ...[2024-12-05 02:51:32.049090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:01.451 [2024-12-05 02:51:32.049199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:07:01.451 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:07:01.451 passed 00:07:01.451 Test: blockdev copy ...passed 00:07:01.451 Suite: bdevio tests on: Nvme2n3 00:07:01.451 Test: blockdev write read block ...passed 00:07:01.451 Test: blockdev write zeroes read block ...passed 00:07:01.451 Test: blockdev write zeroes read no split ...passed 00:07:01.451 Test: blockdev write zeroes read split ...passed 00:07:01.451 Test: blockdev write zeroes read split partial ...passed 00:07:01.451 Test: blockdev reset ...[2024-12-05 02:51:32.103185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:01.451 passed 00:07:01.451 Test: blockdev write read 8 blocks ...[2024-12-05 02:51:32.106532] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
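A note on reading the bdevio output in this and the following suites: the "comparev and writev" case drives the compare path into a deliberate mismatch, so the COMPARE FAILURE (02/85) completions are the intended outcome rather than real errors (the case still reports passed), and the FABRIC CONNECT / RESERVED VENDOR SPECIFIC admin commands answered with INVALID OPCODE in the passthru cases appear to be expected negative paths for the same reason. When triaging a saved copy of output like this, the harness verdicts are a more reliable signal than the NVMe notices, e.g.:

# bdevio.log stands for a hypothetical saved copy of this console output
grep -c 'passed' bdevio.log
grep -c 'failed' bdevio.log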
00:07:01.451 passed 00:07:01.451 Test: blockdev write read size > 128k ...passed 00:07:01.451 Test: blockdev write read invalid size ...passed 00:07:01.451 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.451 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.451 Test: blockdev write read max offset ...passed 00:07:01.451 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.451 Test: blockdev writev readv 8 blocks ...passed 00:07:01.451 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.451 Test: blockdev writev readv block ...passed 00:07:01.451 Test: blockdev writev readv size > 128k ...passed 00:07:01.451 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.451 Test: blockdev comparev and writev ...[2024-12-05 02:51:32.117080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 passed 00:07:01.451 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2b2c02000 len:0x1000 00:07:01.451 [2024-12-05 02:51:32.117202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:01.451 passed 00:07:01.451 Test: blockdev nvme passthru vendor specific ...[2024-12-05 02:51:32.118562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:01.451 [2024-12-05 02:51:32.118591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:01.451 passed 00:07:01.451 Test: blockdev nvme admin passthru ...passed 00:07:01.451 Test: blockdev copy ...passed 00:07:01.451 Suite: bdevio tests on: Nvme2n2 00:07:01.451 Test: blockdev write read block ...passed 00:07:01.451 Test: blockdev write zeroes read block ...passed 00:07:01.451 Test: blockdev write zeroes read no split ...passed 00:07:01.451 Test: blockdev write zeroes read split ...passed 00:07:01.451 Test: blockdev write zeroes read split partial ...passed 00:07:01.451 Test: blockdev reset ...[2024-12-05 02:51:32.175313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:01.451 [2024-12-05 02:51:32.179397] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:07:01.451 Test: blockdev write read 8 blocks ...uccessful. 
00:07:01.451 passed 00:07:01.451 Test: blockdev write read size > 128k ...passed 00:07:01.451 Test: blockdev write read invalid size ...passed 00:07:01.451 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.451 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.451 Test: blockdev write read max offset ...passed 00:07:01.451 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.451 Test: blockdev writev readv 8 blocks ...passed 00:07:01.451 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.451 Test: blockdev writev readv block ...passed 00:07:01.451 Test: blockdev writev readv size > 128k ...passed 00:07:01.451 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.451 Test: blockdev comparev and writev ...[2024-12-05 02:51:32.196476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 passed 00:07:01.451 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2cee38000 len:0x1000 00:07:01.451 [2024-12-05 02:51:32.196597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:01.451 passed 00:07:01.451 Test: blockdev nvme passthru vendor specific ...passed 00:07:01.451 Test: blockdev nvme admin passthru ...[2024-12-05 02:51:32.198724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:01.451 [2024-12-05 02:51:32.198758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:01.451 passed 00:07:01.451 Test: blockdev copy ...passed 00:07:01.451 Suite: bdevio tests on: Nvme2n1 00:07:01.451 Test: blockdev write read block ...passed 00:07:01.451 Test: blockdev write zeroes read block ...passed 00:07:01.451 Test: blockdev write zeroes read no split ...passed 00:07:01.451 Test: blockdev write zeroes read split ...passed 00:07:01.451 Test: blockdev write zeroes read split partial ...passed 00:07:01.451 Test: blockdev reset ...[2024-12-05 02:51:32.259303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:01.451 [2024-12-05 02:51:32.263851] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spasseduccessful. 
00:07:01.451 00:07:01.451 Test: blockdev write read 8 blocks ...passed 00:07:01.451 Test: blockdev write read size > 128k ...passed 00:07:01.451 Test: blockdev write read invalid size ...passed 00:07:01.451 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.451 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.452 Test: blockdev write read max offset ...passed 00:07:01.452 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.452 Test: blockdev writev readv 8 blocks ...passed 00:07:01.452 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.452 Test: blockdev writev readv block ...passed 00:07:01.452 Test: blockdev writev readv size > 128k ...passed 00:07:01.452 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.452 Test: blockdev comparev and writev ...[2024-12-05 02:51:32.283807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cee34000 len:0x1000 00:07:01.452 [2024-12-05 02:51:32.283953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:01.452 passed 00:07:01.452 Test: blockdev nvme passthru rw ...passed 00:07:01.452 Test: blockdev nvme passthru vendor specific ...[2024-12-05 02:51:32.285981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:01.452 [2024-12-05 02:51:32.286021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:01.452 passed 00:07:01.452 Test: blockdev nvme admin passthru ...passed 00:07:01.452 Test: blockdev copy ...passed 00:07:01.452 Suite: bdevio tests on: Nvme1n1p2 00:07:01.452 Test: blockdev write read block ...passed 00:07:01.713 Test: blockdev write zeroes read block ...passed 00:07:01.713 Test: blockdev write zeroes read no split ...passed 00:07:01.713 Test: blockdev write zeroes read split ...passed 00:07:01.713 Test: blockdev write zeroes read split partial ...passed 00:07:01.713 Test: blockdev reset ...[2024-12-05 02:51:32.348946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:01.713 [2024-12-05 02:51:32.353207] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:01.713 passed 00:07:01.713 Test: blockdev write read 8 blocks ...passed 00:07:01.713 Test: blockdev write read size > 128k ...passed 00:07:01.713 Test: blockdev write read invalid size ...passed 00:07:01.713 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.713 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.713 Test: blockdev write read max offset ...passed 00:07:01.713 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.713 Test: blockdev writev readv 8 blocks ...passed 00:07:01.713 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.713 Test: blockdev writev readv block ...passed 00:07:01.713 Test: blockdev writev readv size > 128k ...passed 00:07:01.713 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.713 Test: blockdev comparev and writev ...[2024-12-05 02:51:32.373497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2cee30000 len:0x1000 00:07:01.713 [2024-12-05 02:51:32.373555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:01.713 passed 00:07:01.713 Test: blockdev nvme passthru rw ...passed 00:07:01.713 Test: blockdev nvme passthru vendor specific ...passed 00:07:01.713 Test: blockdev nvme admin passthru ...passed 00:07:01.713 Test: blockdev copy ...passed 00:07:01.713 Suite: bdevio tests on: Nvme1n1p1 00:07:01.713 Test: blockdev write read block ...passed 00:07:01.713 Test: blockdev write zeroes read block ...passed 00:07:01.713 Test: blockdev write zeroes read no split ...passed 00:07:01.713 Test: blockdev write zeroes read split ...passed 00:07:01.713 Test: blockdev write zeroes read split partial ...passed 00:07:01.713 Test: blockdev reset ...[2024-12-05 02:51:32.432201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:01.713 [2024-12-05 02:51:32.436309] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. passed 00:07:01.713 Test: blockdev write read 8 blocks ...
00:07:01.713 passed 00:07:01.713 Test: blockdev write read size > 128k ...passed 00:07:01.713 Test: blockdev write read invalid size ...passed 00:07:01.713 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.713 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.713 Test: blockdev write read max offset ...passed 00:07:01.713 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.713 Test: blockdev writev readv 8 blocks ...passed 00:07:01.713 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.713 Test: blockdev writev readv block ...passed 00:07:01.713 Test: blockdev writev readv size > 128k ...passed 00:07:01.713 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.713 Test: blockdev comparev and writev ...[2024-12-05 02:51:32.457830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b360e000 len:0x1000 00:07:01.713 [2024-12-05 02:51:32.457894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:01.713 passed 00:07:01.713 Test: blockdev nvme passthru rw ...passed 00:07:01.713 Test: blockdev nvme passthru vendor specific ...passed 00:07:01.713 Test: blockdev nvme admin passthru ...passed 00:07:01.713 Test: blockdev copy ...passed 00:07:01.713 Suite: bdevio tests on: Nvme0n1 00:07:01.713 Test: blockdev write read block ...passed 00:07:01.713 Test: blockdev write zeroes read block ...passed 00:07:01.713 Test: blockdev write zeroes read no split ...passed 00:07:01.713 Test: blockdev write zeroes read split ...passed 00:07:01.713 Test: blockdev write zeroes read split partial ...passed 00:07:01.713 Test: blockdev reset ...[2024-12-05 02:51:32.517688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:01.713 [2024-12-05 02:51:32.522134] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:01.713 passed 00:07:01.713 Test: blockdev write read 8 blocks ...passed 00:07:01.713 Test: blockdev write read size > 128k ...passed 00:07:01.713 Test: blockdev write read invalid size ...passed 00:07:01.713 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.713 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.713 Test: blockdev write read max offset ...passed 00:07:01.713 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.713 Test: blockdev writev readv 8 blocks ...passed 00:07:01.713 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.713 Test: blockdev writev readv block ...passed 00:07:01.713 Test: blockdev writev readv size > 128k ...passed 00:07:01.713 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.713 Test: blockdev comparev and writev ...passed 00:07:01.713 Test: blockdev nvme passthru rw ...[2024-12-05 02:51:32.540366] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has
00:07:01.713 separate metadata which is not supported yet. 00:07:01.713 passed 00:07:01.713 Test: blockdev nvme passthru vendor specific ...[2024-12-05 02:51:32.541651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:01.713 [2024-12-05 02:51:32.541873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:01.713 passed 00:07:01.713 Test: blockdev nvme admin passthru ...passed 00:07:01.713 Test: blockdev copy ...passed 00:07:01.713 00:07:01.713 Run Summary: Type Total Ran Passed Failed Inactive 00:07:01.713 suites 7 7 n/a 0 0 00:07:01.713 tests 161 161 161 0 0 00:07:01.713 asserts 1025 1025 1025 0 n/a 00:07:01.713 00:07:01.713 Elapsed time = 1.423 seconds 00:07:01.976 0 00:07:01.976 02:51:32 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61361 00:07:01.976 02:51:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61361 ']' 00:07:01.976 02:51:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61361 00:07:01.976 02:51:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:01.976 02:51:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.976 02:51:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61361 00:07:01.976 02:51:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.976 02:51:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.976 02:51:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61361' 00:07:01.976 killing process with pid 61361 00:07:01.976 02:51:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61361 00:07:01.976 02:51:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61361 00:07:02.574 02:51:33 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:02.574 00:07:02.574 real 0m2.326s 00:07:02.574 user 0m5.927s 00:07:02.574 sys 0m0.288s 00:07:02.574 02:51:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.574 ************************************ 00:07:02.574 END TEST bdev_bounds 00:07:02.574 ************************************ 00:07:02.574 02:51:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:02.574 02:51:33 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:02.574 02:51:33 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:02.574 02:51:33 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.574 02:51:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:02.574 ************************************ 00:07:02.574 START TEST bdev_nbd 00:07:02.574 ************************************ 00:07:02.574 02:51:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:02.574 02:51:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:02.574 02:51:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[
Linux == Linux ]] 00:07:02.574 02:51:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.574 02:51:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:02.574 02:51:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:02.574 02:51:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:02.574 02:51:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:02.575 02:51:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:02.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:02.575 02:51:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:02.575 02:51:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:02.575 02:51:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:02.575 02:51:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:02.575 02:51:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:02.575 02:51:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:02.575 02:51:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:02.575 02:51:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61420 00:07:02.575 02:51:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:02.575 02:51:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61420 /var/tmp/spdk-nbd.sock 00:07:02.575 02:51:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61420 ']' 00:07:02.575 02:51:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:02.575 02:51:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.575 02:51:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:02.575 02:51:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.575 02:51:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:02.575 02:51:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:02.835 [2024-12-05 02:51:33.466863] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
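nbd_function_test above spins up a dedicated bdev_svc instance on its own RPC socket (/var/tmp/spdk-nbd.sock) and only proceeds once the "Waiting for process to start up and listen..." wait succeeds. A condensed sketch of that flow using the same paths the trace shows; the polling loop is a simplified stand-in for the harness's real waitforlisten helper, not its implementation:

  #!/usr/bin/env bash
  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/spdk-nbd.sock
  # Start the bdev service with the bdev config used by this test run.
  "$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 --json "$SPDK/test/bdev/bdev.json" &
  nbd_pid=$!
  # Wait until the RPC server answers on the socket (simplified waitforlisten).
  for _ in $(seq 1 100); do
      "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods > /dev/null 2>&1 && break
      sleep 0.1
  done
  # Export the first bdev as an NBD block device.
  "$SPDK/scripts/rpc.py" -s "$SOCK" nbd_start_disk Nvme0n1 /dev/nbd0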
00:07:02.835 [2024-12-05 02:51:33.467183] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.835 [2024-12-05 02:51:33.628385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.096 [2024-12-05 02:51:33.763177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.669 02:51:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.669 02:51:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:03.669 02:51:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:03.669 02:51:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.669 02:51:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:03.669 02:51:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:03.669 02:51:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:03.669 02:51:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.669 02:51:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:03.669 02:51:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:03.669 02:51:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:03.669 02:51:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:03.669 02:51:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:03.669 02:51:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:03.669 02:51:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:03.929 02:51:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:03.929 02:51:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:03.929 02:51:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:03.929 02:51:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:03.929 02:51:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:03.929 02:51:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:03.929 02:51:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:03.929 02:51:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:03.929 02:51:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:03.929 02:51:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:03.929 02:51:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:03.929 02:51:34 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.929 1+0 records in 00:07:03.929 1+0 records out 00:07:03.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00126578 s, 3.2 MB/s 00:07:03.929 02:51:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.929 02:51:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:03.929 02:51:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.929 02:51:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:03.929 02:51:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:03.929 02:51:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:03.929 02:51:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:03.929 02:51:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:04.188 02:51:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:04.188 02:51:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:04.188 02:51:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:04.188 02:51:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:04.188 02:51:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.188 02:51:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.188 02:51:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.188 02:51:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:04.188 02:51:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.188 02:51:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.188 02:51:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.188 02:51:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.188 1+0 records in 00:07:04.188 1+0 records out 00:07:04.188 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010206 s, 4.0 MB/s 00:07:04.188 02:51:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.188 02:51:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.188 02:51:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.188 02:51:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.188 02:51:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.188 02:51:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.188 02:51:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:04.188 02:51:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:04.446 02:51:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:04.446 02:51:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:04.446 02:51:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:04.446 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:04.446 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.446 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.446 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.446 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:04.446 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.446 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.446 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.446 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.446 1+0 records in 00:07:04.446 1+0 records out 00:07:04.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000672334 s, 6.1 MB/s 00:07:04.446 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.446 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.447 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.447 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.447 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.447 02:51:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.447 02:51:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:04.447 02:51:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:04.704 02:51:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:04.704 02:51:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:04.704 02:51:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:04.704 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:04.704 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.704 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.704 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.704 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:04.704 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.704 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.704 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.704 02:51:35 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.704 1+0 records in 00:07:04.704 1+0 records out 00:07:04.704 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118784 s, 3.4 MB/s 00:07:04.704 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.704 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.704 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.705 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.705 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.705 02:51:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.705 02:51:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:04.705 02:51:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:04.705 02:51:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:04.705 02:51:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:04.705 02:51:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:04.705 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:04.705 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.705 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.705 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.705 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:04.963 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.963 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.963 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.963 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.963 1+0 records in 00:07:04.963 1+0 records out 00:07:04.963 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000719532 s, 5.7 MB/s 00:07:04.963 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.963 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.963 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.963 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.963 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.963 02:51:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.963 02:51:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:04.963 02:51:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
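Each nbd_start_disk above is followed by the waitfornbd helper, whose trace repeats here for nbd0 through nbd6: poll /proc/partitions until the device name shows up, then read one 4096-byte block with O_DIRECT and check that a full block came back. A condensed version of that check, simplified for illustration rather than the exact helper from common/autotest_common.sh:

  waitfornbd_sketch() {
      local name=$1 tmp=/tmp/nbdtest i size
      # Wait up to ~2s for the kernel to publish the device.
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$name" /proc/partitions && break
          sleep 0.1
      done
      # One direct-I/O read proves the NBD connection is actually serving data.
      dd if="/dev/$name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
      size=$(stat -c %s "$tmp")
      rm -f "$tmp"
      [ "$size" -eq 4096 ]
  }
  waitfornbd_sketch nbd0 && echo "nbd0 is serving I/O"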
00:07:04.963 02:51:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:04.963 02:51:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:04.963 02:51:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:04.963 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:04.963 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.963 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.963 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.963 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:04.963 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.963 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.963 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.963 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.963 1+0 records in 00:07:04.963 1+0 records out 00:07:04.963 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105151 s, 3.9 MB/s 00:07:04.963 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.963 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.963 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.963 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.963 02:51:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.963 02:51:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.963 02:51:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:04.963 02:51:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:05.221 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:05.221 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:05.221 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:05.221 02:51:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:05.221 02:51:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.221 02:51:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.221 02:51:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.221 02:51:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:05.221 02:51:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.221 02:51:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.221 02:51:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.221 02:51:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.221 1+0 records in 00:07:05.221 1+0 records out 00:07:05.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115315 s, 3.6 MB/s 00:07:05.221 02:51:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.221 02:51:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.221 02:51:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.221 02:51:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.221 02:51:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.221 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:05.221 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:05.221 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.479 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:05.479 { 00:07:05.479 "nbd_device": "/dev/nbd0", 00:07:05.479 "bdev_name": "Nvme0n1" 00:07:05.479 }, 00:07:05.479 { 00:07:05.479 "nbd_device": "/dev/nbd1", 00:07:05.479 "bdev_name": "Nvme1n1p1" 00:07:05.479 }, 00:07:05.479 { 00:07:05.479 "nbd_device": "/dev/nbd2", 00:07:05.479 "bdev_name": "Nvme1n1p2" 00:07:05.479 }, 00:07:05.479 { 00:07:05.479 "nbd_device": "/dev/nbd3", 00:07:05.479 "bdev_name": "Nvme2n1" 00:07:05.479 }, 00:07:05.479 { 00:07:05.479 "nbd_device": "/dev/nbd4", 00:07:05.479 "bdev_name": "Nvme2n2" 00:07:05.479 }, 00:07:05.479 { 00:07:05.479 "nbd_device": "/dev/nbd5", 00:07:05.479 "bdev_name": "Nvme2n3" 00:07:05.479 }, 00:07:05.479 { 00:07:05.479 "nbd_device": "/dev/nbd6", 00:07:05.479 "bdev_name": "Nvme3n1" 00:07:05.479 } 00:07:05.479 ]' 00:07:05.479 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:05.479 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:05.479 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:05.479 { 00:07:05.479 "nbd_device": "/dev/nbd0", 00:07:05.479 "bdev_name": "Nvme0n1" 00:07:05.479 }, 00:07:05.479 { 00:07:05.479 "nbd_device": "/dev/nbd1", 00:07:05.479 "bdev_name": "Nvme1n1p1" 00:07:05.479 }, 00:07:05.479 { 00:07:05.479 "nbd_device": "/dev/nbd2", 00:07:05.479 "bdev_name": "Nvme1n1p2" 00:07:05.479 }, 00:07:05.479 { 00:07:05.479 "nbd_device": "/dev/nbd3", 00:07:05.479 "bdev_name": "Nvme2n1" 00:07:05.479 }, 00:07:05.479 { 00:07:05.479 "nbd_device": "/dev/nbd4", 00:07:05.479 "bdev_name": "Nvme2n2" 00:07:05.479 }, 00:07:05.479 { 00:07:05.479 "nbd_device": "/dev/nbd5", 00:07:05.479 "bdev_name": "Nvme2n3" 00:07:05.479 }, 00:07:05.479 { 00:07:05.479 "nbd_device": "/dev/nbd6", 00:07:05.479 "bdev_name": "Nvme3n1" 00:07:05.479 } 00:07:05.479 ]' 00:07:05.479 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:05.479 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.479 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:05.479 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:05.479 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:05.479 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.479 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:05.738 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:05.738 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:05.738 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:05.738 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.738 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.738 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:05.738 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.738 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.738 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.738 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:05.996 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:05.996 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:05.997 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:05.997 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.997 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.997 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:05.997 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.997 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.997 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.997 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:06.255 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:06.255 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:06.255 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:06.255 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.255 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.255 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:06.255 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.255 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.255 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.255 02:51:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:06.513 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:06.513 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:06.513 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:06.513 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.513 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.513 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:06.513 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.513 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.513 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.513 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:06.513 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:06.513 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:06.513 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:06.513 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.513 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.513 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:06.513 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.513 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.513 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.513 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:06.772 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:06.772 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:06.772 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:06.772 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.772 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.772 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:06.772 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.772 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.772 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.772 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:07.029 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:07.029 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:07.029 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:07:07.029 02:51:37 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.029 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.029 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:07.029 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.029 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.030 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:07.030 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.030 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:07.288 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:07.288 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:07.288 02:51:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:07.288 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:07.288 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:07.288 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.288 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:07.288 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:07.288 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:07.288 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:07.288 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:07.288 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:07.288 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:07.288 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.288 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:07.288 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:07.288 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:07.288 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:07.288 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:07.288 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.288 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:07.288 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:07.288 02:51:38 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:07.288 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:07.288 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:07.288 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:07.288 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:07.288 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:07.547 /dev/nbd0 00:07:07.547 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:07.547 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:07.547 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:07.547 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:07.547 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:07.547 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:07.547 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:07.547 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:07.547 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:07.547 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:07.547 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.547 1+0 records in 00:07:07.547 1+0 records out 00:07:07.547 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281725 s, 14.5 MB/s 00:07:07.547 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.547 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:07.547 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.547 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:07.547 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:07.547 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.547 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:07.547 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:07.806 /dev/nbd1 00:07:07.806 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:07.806 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:07.806 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:07.806 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:07.806 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:07.806 02:51:38 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:07.806 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:07.806 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:07.806 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:07.806 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:07.806 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.806 1+0 records in 00:07:07.806 1+0 records out 00:07:07.806 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000553129 s, 7.4 MB/s 00:07:07.806 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.806 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:07.806 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.806 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:07.806 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:07.806 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.806 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:07.806 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:08.065 /dev/nbd10 00:07:08.065 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:08.065 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:08.065 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:08.065 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:08.065 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.065 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.065 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:08.065 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:08.065 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.065 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.065 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.065 1+0 records in 00:07:08.065 1+0 records out 00:07:08.065 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349557 s, 11.7 MB/s 00:07:08.065 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.065 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:08.065 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.065 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 
'!=' 0 ']' 00:07:08.065 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:08.065 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.065 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:08.065 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:08.323 /dev/nbd11 00:07:08.323 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:08.323 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:08.323 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:08.323 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:08.323 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.323 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.323 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:08.323 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:08.323 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.323 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.323 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.323 1+0 records in 00:07:08.323 1+0 records out 00:07:08.323 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305396 s, 13.4 MB/s 00:07:08.323 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.323 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:08.323 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.323 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.323 02:51:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:08.323 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.323 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:08.323 02:51:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:08.581 /dev/nbd12 00:07:08.581 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:08.581 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:08.581 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:08.581 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:08.581 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.581 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.581 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:07:08.581 02:51:39 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:08.581 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.581 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.581 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.581 1+0 records in 00:07:08.581 1+0 records out 00:07:08.581 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000713981 s, 5.7 MB/s 00:07:08.581 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.581 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:08.581 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.581 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.581 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:08.581 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.581 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:08.581 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:08.581 /dev/nbd13 00:07:08.839 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:08.839 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:08.839 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:08.839 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:08.839 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.839 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.839 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:08.839 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:08.839 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.839 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.839 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.839 1+0 records in 00:07:08.839 1+0 records out 00:07:08.839 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000974276 s, 4.2 MB/s 00:07:08.839 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.839 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:08.839 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.839 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.839 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:08.839 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ 
)) 00:07:08.839 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:08.839 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:08.839 /dev/nbd14 00:07:08.839 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:09.097 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:09.097 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:09.097 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:09.097 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.097 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.097 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:09.097 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:09.097 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.097 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.097 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.097 1+0 records in 00:07:09.097 1+0 records out 00:07:09.097 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109172 s, 3.8 MB/s 00:07:09.097 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.097 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:09.097 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.097 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.097 02:51:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:09.097 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.097 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:09.097 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:09.097 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.097 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:09.097 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:09.097 { 00:07:09.097 "nbd_device": "/dev/nbd0", 00:07:09.097 "bdev_name": "Nvme0n1" 00:07:09.097 }, 00:07:09.097 { 00:07:09.098 "nbd_device": "/dev/nbd1", 00:07:09.098 "bdev_name": "Nvme1n1p1" 00:07:09.098 }, 00:07:09.098 { 00:07:09.098 "nbd_device": "/dev/nbd10", 00:07:09.098 "bdev_name": "Nvme1n1p2" 00:07:09.098 }, 00:07:09.098 { 00:07:09.098 "nbd_device": "/dev/nbd11", 00:07:09.098 "bdev_name": "Nvme2n1" 00:07:09.098 }, 00:07:09.098 { 00:07:09.098 "nbd_device": "/dev/nbd12", 00:07:09.098 "bdev_name": "Nvme2n2" 00:07:09.098 }, 00:07:09.098 { 00:07:09.098 "nbd_device": "/dev/nbd13", 00:07:09.098 "bdev_name": "Nvme2n3" 00:07:09.098 }, 00:07:09.098 { 
00:07:09.098 "nbd_device": "/dev/nbd14", 00:07:09.098 "bdev_name": "Nvme3n1" 00:07:09.098 } 00:07:09.098 ]' 00:07:09.098 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:09.098 { 00:07:09.098 "nbd_device": "/dev/nbd0", 00:07:09.098 "bdev_name": "Nvme0n1" 00:07:09.098 }, 00:07:09.098 { 00:07:09.098 "nbd_device": "/dev/nbd1", 00:07:09.098 "bdev_name": "Nvme1n1p1" 00:07:09.098 }, 00:07:09.098 { 00:07:09.098 "nbd_device": "/dev/nbd10", 00:07:09.098 "bdev_name": "Nvme1n1p2" 00:07:09.098 }, 00:07:09.098 { 00:07:09.098 "nbd_device": "/dev/nbd11", 00:07:09.098 "bdev_name": "Nvme2n1" 00:07:09.098 }, 00:07:09.098 { 00:07:09.098 "nbd_device": "/dev/nbd12", 00:07:09.098 "bdev_name": "Nvme2n2" 00:07:09.098 }, 00:07:09.098 { 00:07:09.098 "nbd_device": "/dev/nbd13", 00:07:09.098 "bdev_name": "Nvme2n3" 00:07:09.098 }, 00:07:09.098 { 00:07:09.098 "nbd_device": "/dev/nbd14", 00:07:09.098 "bdev_name": "Nvme3n1" 00:07:09.098 } 00:07:09.098 ]' 00:07:09.098 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:09.098 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:09.098 /dev/nbd1 00:07:09.098 /dev/nbd10 00:07:09.098 /dev/nbd11 00:07:09.098 /dev/nbd12 00:07:09.098 /dev/nbd13 00:07:09.098 /dev/nbd14' 00:07:09.098 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:09.098 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:09.098 /dev/nbd1 00:07:09.098 /dev/nbd10 00:07:09.098 /dev/nbd11 00:07:09.098 /dev/nbd12 00:07:09.098 /dev/nbd13 00:07:09.098 /dev/nbd14' 00:07:09.098 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:09.098 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:09.098 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:09.098 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:09.098 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:09.098 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:09.098 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:09.098 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:09.098 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:09.098 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:09.098 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:09.356 256+0 records in 00:07:09.356 256+0 records out 00:07:09.356 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125252 s, 83.7 MB/s 00:07:09.356 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.356 02:51:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:09.356 256+0 records in 00:07:09.356 256+0 records out 00:07:09.356 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.177308 s, 5.9 MB/s 00:07:09.356 
02:51:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.356 02:51:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:09.613 256+0 records in 00:07:09.613 256+0 records out 00:07:09.613 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.178819 s, 5.9 MB/s 00:07:09.613 02:51:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.613 02:51:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:09.870 256+0 records in 00:07:09.870 256+0 records out 00:07:09.870 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155358 s, 6.7 MB/s 00:07:09.870 02:51:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.870 02:51:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:09.870 256+0 records in 00:07:09.870 256+0 records out 00:07:09.870 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.195784 s, 5.4 MB/s 00:07:09.870 02:51:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.870 02:51:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:10.127 256+0 records in 00:07:10.127 256+0 records out 00:07:10.127 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.187209 s, 5.6 MB/s 00:07:10.127 02:51:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.127 02:51:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:10.385 256+0 records in 00:07:10.385 256+0 records out 00:07:10.385 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.216969 s, 4.8 MB/s 00:07:10.385 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.385 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:10.385 256+0 records in 00:07:10.385 256+0 records out 00:07:10.385 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0999428 s, 10.5 MB/s 00:07:10.385 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:10.385 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:10.385 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:10.385 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:10.385 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:10.385 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:10.385 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:10.385 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.385 02:51:41 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:10.385 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.385 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:10.385 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.385 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:10.385 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.385 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:10.385 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.385 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:10.642 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.642 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:10.642 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.642 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:10.642 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:10.642 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:10.642 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.642 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:10.642 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:10.642 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:10.642 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.642 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:10.642 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:10.642 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:10.642 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:10.642 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.642 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.642 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:10.642 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.642 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.642 
02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.642 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:10.899 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:10.899 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:10.899 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:10.900 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.900 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.900 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:10.900 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.900 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.900 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.900 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:11.157 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:11.157 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:11.157 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:11.157 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.157 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.157 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:11.157 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.157 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.157 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.157 02:51:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:11.414 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:11.414 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:11.414 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:11.414 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.414 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.414 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:11.414 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.414 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.414 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.414 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:11.671 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:11.671 02:51:42 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:11.671 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:11.671 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.671 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.671 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:11.671 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.671 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.671 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.671 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:11.956 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:11.956 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:11.956 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:11.956 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.956 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.956 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:11.956 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.956 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.956 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.956 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:11.956 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:11.956 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:11.956 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:11.956 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.956 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.956 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:11.956 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.956 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.956 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:11.956 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.956 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:12.215 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:12.215 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:12.215 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:12.215 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:12.215 
02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:12.215 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.215 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:12.215 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:12.215 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:12.215 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:12.215 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:12.215 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:12.215 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:12.215 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.215 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:12.215 02:51:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:12.470 malloc_lvol_verify 00:07:12.470 02:51:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:12.726 0668e3e3-bc6b-4359-a2fb-f213d470627f 00:07:12.726 02:51:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:12.983 80530202-71b9-47be-a01b-d3aa1b0cbac5 00:07:12.983 02:51:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:13.240 /dev/nbd0 00:07:13.240 02:51:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:13.240 02:51:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:13.240 02:51:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:13.240 02:51:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:13.240 02:51:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:13.240 mke2fs 1.47.0 (5-Feb-2023) 00:07:13.240 Discarding device blocks: 0/4096 done 00:07:13.240 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:13.240 00:07:13.240 Allocating group tables: 0/1 done 00:07:13.240 Writing inode tables: 0/1 done 00:07:13.240 Creating journal (1024 blocks): done 00:07:13.240 Writing superblocks and filesystem accounting information: 0/1 done 00:07:13.240 00:07:13.240 02:51:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:13.240 02:51:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.240 02:51:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:13.240 02:51:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:13.240 02:51:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:13.240 02:51:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.240 02:51:43 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:13.498 02:51:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:13.498 02:51:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:13.498 02:51:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:13.498 02:51:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.498 02:51:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.498 02:51:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:13.498 02:51:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:13.498 02:51:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.498 02:51:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61420 00:07:13.498 02:51:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61420 ']' 00:07:13.498 02:51:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61420 00:07:13.498 02:51:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:13.498 02:51:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.498 02:51:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61420 00:07:13.498 02:51:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.498 killing process with pid 61420 00:07:13.498 02:51:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.498 02:51:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61420' 00:07:13.498 02:51:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61420 00:07:13.498 02:51:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61420 00:07:14.089 02:51:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:14.089 00:07:14.089 real 0m11.507s 00:07:14.089 user 0m15.954s 00:07:14.089 sys 0m3.700s 00:07:14.089 ************************************ 00:07:14.089 END TEST bdev_nbd 00:07:14.089 ************************************ 00:07:14.089 02:51:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.089 02:51:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:14.347 02:51:44 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:14.347 02:51:44 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:07:14.347 skipping fio tests on NVMe due to multi-ns failures. 00:07:14.347 02:51:44 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:07:14.347 02:51:44 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
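The lvol sanity check that closes the nbd test above reduces to a handful of RPCs against the spdk-nbd app. The sequence below is reconstructed from the trace; the names and sizes are the ones the test actually used:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
$rpc -s $sock bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB malloc bdev, 512 B blocks
$rpc -s $sock bdev_lvol_create_lvstore malloc_lvol_verify lvs   # logical-volume store on top of it
$rpc -s $sock bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol inside that store
$rpc -s $sock nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as /dev/nbd0
mkfs.ext4 /dev/nbd0                                             # prove the device is usable end to end
$rpc -s $sock nbd_stop_disk /dev/nbd0                           # tear the mapping down again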
00:07:14.347 02:51:44 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:14.347 02:51:44 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:14.347 02:51:44 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:14.347 02:51:44 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.347 02:51:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:14.347 ************************************ 00:07:14.347 START TEST bdev_verify 00:07:14.347 ************************************ 00:07:14.347 02:51:44 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:14.347 [2024-12-05 02:51:45.034094] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:07:14.347 [2024-12-05 02:51:45.034214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61834 ] 00:07:14.603 [2024-12-05 02:51:45.192836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:14.603 [2024-12-05 02:51:45.296963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.603 [2024-12-05 02:51:45.297103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.167 Running I/O for 5 seconds... 
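The verify run started above is a single bdevperf invocation. The command is reproduced from the trace; the flag glosses in the comments are the usual bdevperf meanings and are not stated in this log, and the -C flag is left unannotated:

# --json bdev.json   bdev configuration (NVMe controllers plus the two GPT partitions)
# -q 128             queue depth
# -o 4096            I/O size in bytes (4 KiB)
# -w verify          write-then-read-back workload with data checking
# -t 5               run time in seconds
# -m 0x3             core mask: reactors on cores 0 and 1
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''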
00:07:17.466 19200.00 IOPS, 75.00 MiB/s [2024-12-05T02:51:49.240Z] 20160.00 IOPS, 78.75 MiB/s [2024-12-05T02:51:50.173Z] 21205.33 IOPS, 82.83 MiB/s [2024-12-05T02:51:51.109Z] 21888.00 IOPS, 85.50 MiB/s [2024-12-05T02:51:51.109Z] 22131.20 IOPS, 86.45 MiB/s 00:07:20.265 Latency(us) 00:07:20.265 [2024-12-05T02:51:51.109Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:20.265 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:20.265 Verification LBA range: start 0x0 length 0xbd0bd 00:07:20.265 Nvme0n1 : 5.05 1545.05 6.04 0.00 0.00 82644.22 17241.01 98001.53 00:07:20.265 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:20.265 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:20.265 Nvme0n1 : 5.06 1581.12 6.18 0.00 0.00 80584.48 7309.78 91145.45 00:07:20.265 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:20.265 Verification LBA range: start 0x0 length 0x4ff80 00:07:20.265 Nvme1n1p1 : 5.06 1544.09 6.03 0.00 0.00 82548.92 18249.26 93161.94 00:07:20.265 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:20.265 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:20.265 Nvme1n1p1 : 5.08 1587.24 6.20 0.00 0.00 80134.90 13510.50 79046.50 00:07:20.265 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:20.265 Verification LBA range: start 0x0 length 0x4ff7f 00:07:20.265 Nvme1n1p2 : 5.06 1543.61 6.03 0.00 0.00 82441.22 17140.18 90742.15 00:07:20.265 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:20.265 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:20.265 Nvme1n1p2 : 5.08 1586.76 6.20 0.00 0.00 79971.40 13611.32 69367.34 00:07:20.265 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:20.265 Verification LBA range: start 0x0 length 0x80000 00:07:20.265 Nvme2n1 : 5.06 1543.16 6.03 0.00 0.00 82296.11 17039.36 87515.77 00:07:20.265 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:20.265 Verification LBA range: start 0x80000 length 0x80000 00:07:20.265 Nvme2n1 : 5.09 1585.82 6.19 0.00 0.00 79876.73 15224.52 67350.84 00:07:20.265 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:20.265 Verification LBA range: start 0x0 length 0x80000 00:07:20.265 Nvme2n2 : 5.06 1541.95 6.02 0.00 0.00 82177.41 16232.76 89935.56 00:07:20.265 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:20.265 Verification LBA range: start 0x80000 length 0x80000 00:07:20.266 Nvme2n2 : 5.09 1585.41 6.19 0.00 0.00 79747.22 14720.39 68560.74 00:07:20.266 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:20.266 Verification LBA range: start 0x0 length 0x80000 00:07:20.266 Nvme2n3 : 5.08 1551.11 6.06 0.00 0.00 81542.75 3251.59 93565.24 00:07:20.266 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:20.266 Verification LBA range: start 0x80000 length 0x80000 00:07:20.266 Nvme2n3 : 5.09 1585.00 6.19 0.00 0.00 79585.89 14317.10 70577.23 00:07:20.266 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:20.266 Verification LBA range: start 0x0 length 0x20000 00:07:20.266 Nvme3n1 : 5.08 1561.00 6.10 0.00 0.00 80910.49 6452.78 98808.12 00:07:20.266 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:20.266 Verification LBA range: start 0x20000 length 0x20000 00:07:20.266 Nvme3n1 
: 5.09 1584.60 6.19 0.00 0.00 79498.42 13611.32 73400.32 00:07:20.266 [2024-12-05T02:51:51.110Z] =================================================================================================================== 00:07:20.266 [2024-12-05T02:51:51.110Z] Total : 21925.92 85.65 0.00 0.00 80980.16 3251.59 98808.12 00:07:21.643 00:07:21.643 real 0m7.259s 00:07:21.643 user 0m13.629s 00:07:21.643 sys 0m0.203s 00:07:21.643 02:51:52 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.643 02:51:52 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:21.643 ************************************ 00:07:21.643 END TEST bdev_verify 00:07:21.643 ************************************ 00:07:21.643 02:51:52 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:21.643 02:51:52 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:21.643 02:51:52 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.643 02:51:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:21.643 ************************************ 00:07:21.643 START TEST bdev_verify_big_io 00:07:21.643 ************************************ 00:07:21.643 02:51:52 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:21.643 [2024-12-05 02:51:52.326544] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:07:21.643 [2024-12-05 02:51:52.326661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61932 ] 00:07:21.901 [2024-12-05 02:51:52.487334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:21.901 [2024-12-05 02:51:52.585232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.901 [2024-12-05 02:51:52.585308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.467 Running I/O for 5 seconds... 
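As a quick sanity check on how the bdev_verify summary above is reported: the MiB/s column is just IOPS multiplied by the 4 KiB I/O size. For the final 5-second sample, for example:

echo 'scale=4; 22131.20 * 4096 / (1024 * 1024)' | bc   # prints 86.4500, matching the 86.45 MiB/s in the log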
00:07:28.323 314.00 IOPS, 19.62 MiB/s [2024-12-05T02:51:59.733Z] 2870.50 IOPS, 179.41 MiB/s [2024-12-05T02:51:59.733Z] 3280.67 IOPS, 205.04 MiB/s 00:07:28.889 Latency(us) 00:07:28.889 [2024-12-05T02:51:59.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:28.889 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:28.889 Verification LBA range: start 0x0 length 0xbd0b 00:07:28.889 Nvme0n1 : 6.22 79.73 4.98 0.00 0.00 1498477.08 16938.54 2116510.33 00:07:28.889 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:28.889 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:28.889 Nvme0n1 : 5.85 102.03 6.38 0.00 0.00 1185259.30 19559.98 1413157.81 00:07:28.889 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:28.889 Verification LBA range: start 0x0 length 0x4ff8 00:07:28.889 Nvme1n1p1 : 6.00 110.85 6.93 0.00 0.00 1060049.69 88322.36 1213121.77 00:07:28.889 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:28.889 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:28.889 Nvme1n1p1 : 5.85 101.31 6.33 0.00 0.00 1147560.61 96791.63 1309913.40 00:07:28.889 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:28.889 Verification LBA range: start 0x0 length 0x4ff7 00:07:28.889 Nvme1n1p2 : 6.07 116.02 7.25 0.00 0.00 993544.66 65737.65 1013085.74 00:07:28.889 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:28.889 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:28.889 Nvme1n1p2 : 5.99 107.43 6.71 0.00 0.00 1046902.06 163739.18 1167952.34 00:07:28.889 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:28.889 Verification LBA range: start 0x0 length 0x8000 00:07:28.889 Nvme2n1 : 6.18 119.91 7.49 0.00 0.00 928551.98 68560.74 980821.86 00:07:28.889 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:28.889 Verification LBA range: start 0x8000 length 0x8000 00:07:28.889 Nvme2n1 : 6.07 116.25 7.27 0.00 0.00 957682.04 37305.11 1193763.45 00:07:28.889 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:28.889 Verification LBA range: start 0x0 length 0x8000 00:07:28.889 Nvme2n2 : 6.14 112.04 7.00 0.00 0.00 960694.16 68560.74 1729343.80 00:07:28.889 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:28.889 Verification LBA range: start 0x8000 length 0x8000 00:07:28.889 Nvme2n2 : 6.12 122.36 7.65 0.00 0.00 880814.74 38313.35 1219574.55 00:07:28.889 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:28.889 Verification LBA range: start 0x0 length 0x8000 00:07:28.889 Nvme2n3 : 6.23 119.68 7.48 0.00 0.00 867938.08 42951.29 1755154.90 00:07:28.889 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:28.889 Verification LBA range: start 0x8000 length 0x8000 00:07:28.889 Nvme2n3 : 6.15 128.68 8.04 0.00 0.00 809422.12 23693.78 1238932.87 00:07:28.889 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:28.889 Verification LBA range: start 0x0 length 0x2000 00:07:28.889 Nvme3n1 : 6.31 138.81 8.68 0.00 0.00 726054.65 661.66 1780966.01 00:07:28.889 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:28.889 Verification LBA range: start 0x2000 length 0x2000 00:07:28.889 Nvme3n1 : 6.28 159.46 9.97 0.00 0.00 634097.10 140.21 1271196.75 00:07:28.889 
[2024-12-05T02:51:59.733Z] =================================================================================================================== 00:07:28.889 [2024-12-05T02:51:59.733Z] Total : 1634.55 102.16 0.00 0.00 945621.40 140.21 2116510.33 00:07:30.787 00:07:30.787 real 0m8.853s 00:07:30.787 user 0m16.776s 00:07:30.787 sys 0m0.242s 00:07:30.787 02:52:01 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.787 02:52:01 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:30.787 ************************************ 00:07:30.787 END TEST bdev_verify_big_io 00:07:30.787 ************************************ 00:07:30.787 02:52:01 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:30.787 02:52:01 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:30.787 02:52:01 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.787 02:52:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:30.787 ************************************ 00:07:30.787 START TEST bdev_write_zeroes 00:07:30.787 ************************************ 00:07:30.787 02:52:01 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:30.787 [2024-12-05 02:52:01.220873] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:07:30.787 [2024-12-05 02:52:01.220997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62047 ] 00:07:30.787 [2024-12-05 02:52:01.372770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.787 [2024-12-05 02:52:01.474381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.353 Running I/O for 1 seconds... 
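The write_zeroes pass reuses the same bdevperf harness with a different workload. The command below is reproduced from the trace; note the shorter 1-second run and the absence of -C/-m, so the EAL parameters above show a single core (-c 0x1):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1 ''   # issue 4 KiB write-zeroes I/O to every bdev for one second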
00:07:32.288 72128.00 IOPS, 281.75 MiB/s 00:07:32.288 Latency(us) 00:07:32.288 [2024-12-05T02:52:03.132Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:32.288 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:32.288 Nvme0n1 : 1.02 10251.54 40.05 0.00 0.00 12458.32 9023.80 24702.03 00:07:32.288 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:32.288 Nvme1n1p1 : 1.03 10239.10 40.00 0.00 0.00 12455.38 9729.58 24298.73 00:07:32.288 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:32.288 Nvme1n1p2 : 1.03 10226.62 39.95 0.00 0.00 12444.27 9527.93 23592.96 00:07:32.288 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:32.288 Nvme2n1 : 1.03 10215.05 39.90 0.00 0.00 12439.22 9074.22 22786.36 00:07:32.288 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:32.288 Nvme2n2 : 1.03 10203.60 39.86 0.00 0.00 12427.28 8267.62 22383.06 00:07:32.288 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:32.288 Nvme2n3 : 1.03 10192.17 39.81 0.00 0.00 12420.08 7813.91 23088.84 00:07:32.288 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:32.288 Nvme3n1 : 1.03 10180.77 39.77 0.00 0.00 12414.06 7309.78 24702.03 00:07:32.288 [2024-12-05T02:52:03.132Z] =================================================================================================================== 00:07:32.288 [2024-12-05T02:52:03.132Z] Total : 71508.85 279.33 0.00 0.00 12436.94 7309.78 24702.03 00:07:33.223 00:07:33.223 real 0m2.692s 00:07:33.223 user 0m2.401s 00:07:33.223 sys 0m0.178s 00:07:33.223 02:52:03 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.223 02:52:03 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:33.223 ************************************ 00:07:33.223 END TEST bdev_write_zeroes 00:07:33.223 ************************************ 00:07:33.223 02:52:03 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:33.223 02:52:03 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:33.223 02:52:03 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.223 02:52:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:33.223 ************************************ 00:07:33.223 START TEST bdev_json_nonenclosed 00:07:33.223 ************************************ 00:07:33.223 02:52:03 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:33.223 [2024-12-05 02:52:03.984088] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:07:33.223 [2024-12-05 02:52:03.984229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62100 ] 00:07:33.482 [2024-12-05 02:52:04.142929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.482 [2024-12-05 02:52:04.241329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.482 [2024-12-05 02:52:04.241403] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:33.482 [2024-12-05 02:52:04.241420] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:33.482 [2024-12-05 02:52:04.241429] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:33.740 00:07:33.740 real 0m0.496s 00:07:33.740 user 0m0.305s 00:07:33.740 sys 0m0.086s 00:07:33.740 ************************************ 00:07:33.740 END TEST bdev_json_nonenclosed 00:07:33.740 ************************************ 00:07:33.740 02:52:04 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.740 02:52:04 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:33.740 02:52:04 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:33.740 02:52:04 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:33.740 02:52:04 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.740 02:52:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:33.740 ************************************ 00:07:33.740 START TEST bdev_json_nonarray 00:07:33.740 ************************************ 00:07:33.740 02:52:04 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:33.740 [2024-12-05 02:52:04.545507] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:07:33.740 [2024-12-05 02:52:04.545626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62120 ] 00:07:33.998 [2024-12-05 02:52:04.703618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.998 [2024-12-05 02:52:04.801477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.998 [2024-12-05 02:52:04.801563] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
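Both negative tests here feed bdevperf a config file that violates the expected top-level shape: nonenclosed.json is not wrapped in an object, and nonarray.json has a 'subsystems' value that is not an array, so json_config_prepare_ctx rejects each file as the *ERROR* lines show. For orientation, a minimal well-formed config has the shape sketched below; the file name and the Malloc0 parameters are invented for this illustration, and the actual test/bdev/bdev.json used above is generated by the test setup:

cat > /tmp/minimal_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 16384, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF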
00:07:33.998 [2024-12-05 02:52:04.801580] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:33.999 [2024-12-05 02:52:04.801589] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:34.257 00:07:34.257 real 0m0.496s 00:07:34.257 user 0m0.308s 00:07:34.257 sys 0m0.084s 00:07:34.257 ************************************ 00:07:34.257 END TEST bdev_json_nonarray 00:07:34.257 ************************************ 00:07:34.257 02:52:04 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.257 02:52:04 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:34.257 02:52:05 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:07:34.257 02:52:05 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:07:34.257 02:52:05 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:34.257 02:52:05 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:34.257 02:52:05 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.257 02:52:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:34.257 ************************************ 00:07:34.257 START TEST bdev_gpt_uuid 00:07:34.257 ************************************ 00:07:34.257 02:52:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:34.257 02:52:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:07:34.257 02:52:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:07:34.257 02:52:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62151 00:07:34.257 02:52:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:34.257 02:52:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62151 00:07:34.257 02:52:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62151 ']' 00:07:34.257 02:52:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.257 02:52:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.257 02:52:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.257 02:52:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.257 02:52:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:34.257 02:52:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:34.515 [2024-12-05 02:52:05.126516] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:07:34.515 [2024-12-05 02:52:05.126946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62151 ] 00:07:34.515 [2024-12-05 02:52:05.289840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.773 [2024-12-05 02:52:05.390086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.340 02:52:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.340 02:52:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:35.340 02:52:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:35.340 02:52:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.340 02:52:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:35.599 Some configs were skipped because the RPC state that can call them passed over. 00:07:35.599 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.599 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:07:35.599 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.599 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:35.599 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.599 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:35.599 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.599 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:35.599 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.599 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:07:35.599 { 00:07:35.599 "name": "Nvme1n1p1", 00:07:35.599 "aliases": [ 00:07:35.599 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:35.599 ], 00:07:35.599 "product_name": "GPT Disk", 00:07:35.599 "block_size": 4096, 00:07:35.599 "num_blocks": 655104, 00:07:35.599 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:35.599 "assigned_rate_limits": { 00:07:35.599 "rw_ios_per_sec": 0, 00:07:35.599 "rw_mbytes_per_sec": 0, 00:07:35.599 "r_mbytes_per_sec": 0, 00:07:35.599 "w_mbytes_per_sec": 0 00:07:35.599 }, 00:07:35.599 "claimed": false, 00:07:35.599 "zoned": false, 00:07:35.599 "supported_io_types": { 00:07:35.599 "read": true, 00:07:35.599 "write": true, 00:07:35.599 "unmap": true, 00:07:35.599 "flush": true, 00:07:35.599 "reset": true, 00:07:35.599 "nvme_admin": false, 00:07:35.599 "nvme_io": false, 00:07:35.599 "nvme_io_md": false, 00:07:35.599 "write_zeroes": true, 00:07:35.599 "zcopy": false, 00:07:35.599 "get_zone_info": false, 00:07:35.599 "zone_management": false, 00:07:35.599 "zone_append": false, 00:07:35.599 "compare": true, 00:07:35.599 "compare_and_write": false, 00:07:35.599 "abort": true, 00:07:35.599 "seek_hole": false, 00:07:35.599 "seek_data": false, 00:07:35.599 "copy": true, 00:07:35.599 "nvme_iov_md": false 00:07:35.599 }, 00:07:35.599 "driver_specific": { 
00:07:35.599 "gpt": { 00:07:35.599 "base_bdev": "Nvme1n1", 00:07:35.599 "offset_blocks": 256, 00:07:35.599 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:35.599 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:35.599 "partition_name": "SPDK_TEST_first" 00:07:35.599 } 00:07:35.599 } 00:07:35.599 } 00:07:35.599 ]' 00:07:35.599 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:07:35.599 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:07:35.599 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:07:35.599 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:35.599 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:35.599 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:35.599 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:35.599 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.599 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:35.858 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.858 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:07:35.858 { 00:07:35.858 "name": "Nvme1n1p2", 00:07:35.858 "aliases": [ 00:07:35.858 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:35.858 ], 00:07:35.858 "product_name": "GPT Disk", 00:07:35.858 "block_size": 4096, 00:07:35.858 "num_blocks": 655103, 00:07:35.858 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:35.858 "assigned_rate_limits": { 00:07:35.858 "rw_ios_per_sec": 0, 00:07:35.858 "rw_mbytes_per_sec": 0, 00:07:35.858 "r_mbytes_per_sec": 0, 00:07:35.858 "w_mbytes_per_sec": 0 00:07:35.858 }, 00:07:35.858 "claimed": false, 00:07:35.858 "zoned": false, 00:07:35.859 "supported_io_types": { 00:07:35.859 "read": true, 00:07:35.859 "write": true, 00:07:35.859 "unmap": true, 00:07:35.859 "flush": true, 00:07:35.859 "reset": true, 00:07:35.859 "nvme_admin": false, 00:07:35.859 "nvme_io": false, 00:07:35.859 "nvme_io_md": false, 00:07:35.859 "write_zeroes": true, 00:07:35.859 "zcopy": false, 00:07:35.859 "get_zone_info": false, 00:07:35.859 "zone_management": false, 00:07:35.859 "zone_append": false, 00:07:35.859 "compare": true, 00:07:35.859 "compare_and_write": false, 00:07:35.859 "abort": true, 00:07:35.859 "seek_hole": false, 00:07:35.859 "seek_data": false, 00:07:35.859 "copy": true, 00:07:35.859 "nvme_iov_md": false 00:07:35.859 }, 00:07:35.859 "driver_specific": { 00:07:35.859 "gpt": { 00:07:35.859 "base_bdev": "Nvme1n1", 00:07:35.859 "offset_blocks": 655360, 00:07:35.859 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:35.859 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:35.859 "partition_name": "SPDK_TEST_second" 00:07:35.859 } 00:07:35.859 } 00:07:35.859 } 00:07:35.859 ]' 00:07:35.859 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:07:35.859 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:07:35.859 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:07:35.859 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:35.859 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:35.859 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:35.859 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 62151 00:07:35.859 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62151 ']' 00:07:35.859 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62151 00:07:35.859 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:35.859 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.859 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62151 00:07:35.859 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:35.859 killing process with pid 62151 00:07:35.859 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:35.859 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62151' 00:07:35.859 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62151 00:07:35.859 02:52:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62151 00:07:37.236 00:07:37.236 real 0m3.022s 00:07:37.236 user 0m3.150s 00:07:37.236 sys 0m0.375s 00:07:37.236 02:52:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.236 ************************************ 00:07:37.236 END TEST bdev_gpt_uuid 00:07:37.236 02:52:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:37.236 ************************************ 00:07:37.495 02:52:08 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:07:37.495 02:52:08 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:37.495 02:52:08 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:07:37.495 02:52:08 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:37.495 02:52:08 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:37.495 02:52:08 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:37.495 02:52:08 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:37.495 02:52:08 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:37.495 02:52:08 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:37.754 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:37.754 Waiting for block devices as requested 00:07:38.013 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:38.013 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:38.013 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:38.272 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:43.631 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:43.631 02:52:13 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:43.631 02:52:13 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:43.631 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:43.631 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:43.631 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:43.631 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:43.631 02:52:14 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:43.631 00:07:43.631 real 0m57.247s 00:07:43.631 user 1m13.028s 00:07:43.631 sys 0m7.973s 00:07:43.631 ************************************ 00:07:43.631 END TEST blockdev_nvme_gpt 00:07:43.631 ************************************ 00:07:43.631 02:52:14 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.631 02:52:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:43.631 02:52:14 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:43.631 02:52:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:43.631 02:52:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.631 02:52:14 -- common/autotest_common.sh@10 -- # set +x 00:07:43.631 ************************************ 00:07:43.631 START TEST nvme 00:07:43.631 ************************************ 00:07:43.631 02:52:14 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:43.631 * Looking for test storage... 00:07:43.631 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:43.631 02:52:14 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:43.631 02:52:14 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:07:43.631 02:52:14 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:43.631 02:52:14 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:43.631 02:52:14 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:43.631 02:52:14 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:43.631 02:52:14 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:43.631 02:52:14 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:43.631 02:52:14 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:43.631 02:52:14 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:43.631 02:52:14 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:43.631 02:52:14 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:43.631 02:52:14 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:43.631 02:52:14 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:43.631 02:52:14 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:43.631 02:52:14 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:43.631 02:52:14 nvme -- scripts/common.sh@345 -- # : 1 00:07:43.631 02:52:14 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:43.631 02:52:14 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:43.631 02:52:14 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:43.631 02:52:14 nvme -- scripts/common.sh@353 -- # local d=1 00:07:43.631 02:52:14 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:43.631 02:52:14 nvme -- scripts/common.sh@355 -- # echo 1 00:07:43.631 02:52:14 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:43.631 02:52:14 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:43.631 02:52:14 nvme -- scripts/common.sh@353 -- # local d=2 00:07:43.631 02:52:14 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:43.631 02:52:14 nvme -- scripts/common.sh@355 -- # echo 2 00:07:43.631 02:52:14 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:43.631 02:52:14 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:43.631 02:52:14 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:43.631 02:52:14 nvme -- scripts/common.sh@368 -- # return 0 00:07:43.631 02:52:14 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:43.631 02:52:14 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:43.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.631 --rc genhtml_branch_coverage=1 00:07:43.631 --rc genhtml_function_coverage=1 00:07:43.631 --rc genhtml_legend=1 00:07:43.631 --rc geninfo_all_blocks=1 00:07:43.631 --rc geninfo_unexecuted_blocks=1 00:07:43.631 00:07:43.631 ' 00:07:43.631 02:52:14 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:43.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.631 --rc genhtml_branch_coverage=1 00:07:43.631 --rc genhtml_function_coverage=1 00:07:43.631 --rc genhtml_legend=1 00:07:43.631 --rc geninfo_all_blocks=1 00:07:43.631 --rc geninfo_unexecuted_blocks=1 00:07:43.631 00:07:43.631 ' 00:07:43.631 02:52:14 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:43.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.631 --rc genhtml_branch_coverage=1 00:07:43.631 --rc genhtml_function_coverage=1 00:07:43.631 --rc genhtml_legend=1 00:07:43.631 --rc geninfo_all_blocks=1 00:07:43.631 --rc geninfo_unexecuted_blocks=1 00:07:43.631 00:07:43.631 ' 00:07:43.631 02:52:14 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:43.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.631 --rc genhtml_branch_coverage=1 00:07:43.631 --rc genhtml_function_coverage=1 00:07:43.631 --rc genhtml_legend=1 00:07:43.631 --rc geninfo_all_blocks=1 00:07:43.631 --rc geninfo_unexecuted_blocks=1 00:07:43.631 00:07:43.631 ' 00:07:43.631 02:52:14 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:44.196 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:44.759 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:44.759 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:44.759 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:44.759 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:44.759 02:52:15 nvme -- nvme/nvme.sh@79 -- # uname 00:07:44.759 02:52:15 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:44.759 02:52:15 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:44.759 02:52:15 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:44.759 02:52:15 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:44.759 02:52:15 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:44.759 02:52:15 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:44.759 Waiting for stub to ready for secondary processes... 00:07:44.759 02:52:15 nvme -- common/autotest_common.sh@1075 -- # stubpid=62785 00:07:44.759 02:52:15 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:44.759 02:52:15 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:44.759 02:52:15 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62785 ]] 00:07:44.759 02:52:15 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:44.759 02:52:15 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:44.759 [2024-12-05 02:52:15.500735] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:07:44.759 [2024-12-05 02:52:15.500826] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:45.689 [2024-12-05 02:52:16.216924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:45.689 [2024-12-05 02:52:16.312278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.689 [2024-12-05 02:52:16.312424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.689 [2024-12-05 02:52:16.312499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.689 [2024-12-05 02:52:16.326234] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:45.689 [2024-12-05 02:52:16.326268] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:45.689 [2024-12-05 02:52:16.341166] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:45.689 [2024-12-05 02:52:16.341371] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:45.689 [2024-12-05 02:52:16.345513] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:45.689 [2024-12-05 02:52:16.345843] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:45.689 [2024-12-05 02:52:16.345940] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:45.689 [2024-12-05 02:52:16.349478] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:45.689 [2024-12-05 02:52:16.349639] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:45.689 [2024-12-05 02:52:16.349685] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:45.689 [2024-12-05 02:52:16.351491] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:45.689 [2024-12-05 02:52:16.351646] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:45.689 [2024-12-05 02:52:16.351695] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:45.689 [2024-12-05 02:52:16.351729] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:45.689 [2024-12-05 02:52:16.351758] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:45.689 02:52:16 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:45.689 02:52:16 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:45.689 done. 00:07:45.689 02:52:16 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:45.689 02:52:16 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:45.689 02:52:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.689 02:52:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:45.689 ************************************ 00:07:45.689 START TEST nvme_reset 00:07:45.689 ************************************ 00:07:45.689 02:52:16 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:45.947 Initializing NVMe Controllers 00:07:45.947 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:45.947 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:45.947 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:45.947 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:45.947 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:45.947 00:07:45.947 real 0m0.210s 00:07:45.947 user 0m0.064s 00:07:45.947 sys 0m0.104s 00:07:45.947 02:52:16 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.947 02:52:16 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:45.947 ************************************ 00:07:45.947 END TEST nvme_reset 00:07:45.947 ************************************ 00:07:45.947 02:52:16 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:45.947 02:52:16 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.947 02:52:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.947 02:52:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:45.947 ************************************ 00:07:45.947 START TEST nvme_identify 00:07:45.947 ************************************ 00:07:45.947 02:52:16 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:45.947 02:52:16 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:45.947 02:52:16 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:45.947 02:52:16 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:45.947 02:52:16 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:45.947 02:52:16 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:45.947 02:52:16 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:45.948 02:52:16 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:45.948 02:52:16 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:45.948 02:52:16 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:46.210 02:52:16 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:46.210 02:52:16 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:46.210 02:52:16 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:46.210 [2024-12-05 
02:52:16.985472] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62806 terminated unexpected 00:07:46.210 ===================================================== 00:07:46.210 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:46.210 ===================================================== 00:07:46.210 Controller Capabilities/Features 00:07:46.210 ================================ 00:07:46.210 Vendor ID: 1b36 00:07:46.210 Subsystem Vendor ID: 1af4 00:07:46.210 Serial Number: 12341 00:07:46.210 Model Number: QEMU NVMe Ctrl 00:07:46.210 Firmware Version: 8.0.0 00:07:46.210 Recommended Arb Burst: 6 00:07:46.210 IEEE OUI Identifier: 00 54 52 00:07:46.210 Multi-path I/O 00:07:46.210 May have multiple subsystem ports: No 00:07:46.210 May have multiple controllers: No 00:07:46.210 Associated with SR-IOV VF: No 00:07:46.210 Max Data Transfer Size: 524288 00:07:46.210 Max Number of Namespaces: 256 00:07:46.210 Max Number of I/O Queues: 64 00:07:46.210 NVMe Specification Version (VS): 1.4 00:07:46.210 NVMe Specification Version (Identify): 1.4 00:07:46.210 Maximum Queue Entries: 2048 00:07:46.210 Contiguous Queues Required: Yes 00:07:46.210 Arbitration Mechanisms Supported 00:07:46.210 Weighted Round Robin: Not Supported 00:07:46.210 Vendor Specific: Not Supported 00:07:46.210 Reset Timeout: 7500 ms 00:07:46.210 Doorbell Stride: 4 bytes 00:07:46.210 NVM Subsystem Reset: Not Supported 00:07:46.210 Command Sets Supported 00:07:46.210 NVM Command Set: Supported 00:07:46.210 Boot Partition: Not Supported 00:07:46.210 Memory Page Size Minimum: 4096 bytes 00:07:46.211 Memory Page Size Maximum: 65536 bytes 00:07:46.211 Persistent Memory Region: Not Supported 00:07:46.211 Optional Asynchronous Events Supported 00:07:46.211 Namespace Attribute Notices: Supported 00:07:46.211 Firmware Activation Notices: Not Supported 00:07:46.211 ANA Change Notices: Not Supported 00:07:46.211 PLE Aggregate Log Change Notices: Not Supported 00:07:46.211 LBA Status Info Alert Notices: Not Supported 00:07:46.211 EGE Aggregate Log Change Notices: Not Supported 00:07:46.211 Normal NVM Subsystem Shutdown event: Not Supported 00:07:46.211 Zone Descriptor Change Notices: Not Supported 00:07:46.211 Discovery Log Change Notices: Not Supported 00:07:46.211 Controller Attributes 00:07:46.211 128-bit Host Identifier: Not Supported 00:07:46.211 Non-Operational Permissive Mode: Not Supported 00:07:46.211 NVM Sets: Not Supported 00:07:46.211 Read Recovery Levels: Not Supported 00:07:46.211 Endurance Groups: Not Supported 00:07:46.211 Predictable Latency Mode: Not Supported 00:07:46.211 Traffic Based Keep ALive: Not Supported 00:07:46.211 Namespace Granularity: Not Supported 00:07:46.211 SQ Associations: Not Supported 00:07:46.211 UUID List: Not Supported 00:07:46.211 Multi-Domain Subsystem: Not Supported 00:07:46.211 Fixed Capacity Management: Not Supported 00:07:46.211 Variable Capacity Management: Not Supported 00:07:46.211 Delete Endurance Group: Not Supported 00:07:46.211 Delete NVM Set: Not Supported 00:07:46.211 Extended LBA Formats Supported: Supported 00:07:46.211 Flexible Data Placement Supported: Not Supported 00:07:46.211 00:07:46.211 Controller Memory Buffer Support 00:07:46.211 ================================ 00:07:46.211 Supported: No 00:07:46.211 00:07:46.211 Persistent Memory Region Support 00:07:46.211 ================================ 00:07:46.211 Supported: No 00:07:46.211 00:07:46.211 Admin Command Set Attributes 00:07:46.211 ============================ 00:07:46.211 Security Send/Receive: 
Not Supported 00:07:46.211 Format NVM: Supported 00:07:46.211 Firmware Activate/Download: Not Supported 00:07:46.211 Namespace Management: Supported 00:07:46.211 Device Self-Test: Not Supported 00:07:46.211 Directives: Supported 00:07:46.211 NVMe-MI: Not Supported 00:07:46.211 Virtualization Management: Not Supported 00:07:46.211 Doorbell Buffer Config: Supported 00:07:46.211 Get LBA Status Capability: Not Supported 00:07:46.211 Command & Feature Lockdown Capability: Not Supported 00:07:46.211 Abort Command Limit: 4 00:07:46.211 Async Event Request Limit: 4 00:07:46.211 Number of Firmware Slots: N/A 00:07:46.211 Firmware Slot 1 Read-Only: N/A 00:07:46.211 Firmware Activation Without Reset: N/A 00:07:46.211 Multiple Update Detection Support: N/A 00:07:46.211 Firmware Update Granularity: No Information Provided 00:07:46.211 Per-Namespace SMART Log: Yes 00:07:46.211 Asymmetric Namespace Access Log Page: Not Supported 00:07:46.211 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:46.211 Command Effects Log Page: Supported 00:07:46.211 Get Log Page Extended Data: Supported 00:07:46.211 Telemetry Log Pages: Not Supported 00:07:46.211 Persistent Event Log Pages: Not Supported 00:07:46.211 Supported Log Pages Log Page: May Support 00:07:46.211 Commands Supported & Effects Log Page: Not Supported 00:07:46.211 Feature Identifiers & Effects Log Page:May Support 00:07:46.211 NVMe-MI Commands & Effects Log Page: May Support 00:07:46.211 Data Area 4 for Telemetry Log: Not Supported 00:07:46.211 Error Log Page Entries Supported: 1 00:07:46.211 Keep Alive: Not Supported 00:07:46.211 00:07:46.211 NVM Command Set Attributes 00:07:46.211 ========================== 00:07:46.211 Submission Queue Entry Size 00:07:46.211 Max: 64 00:07:46.211 Min: 64 00:07:46.211 Completion Queue Entry Size 00:07:46.211 Max: 16 00:07:46.211 Min: 16 00:07:46.211 Number of Namespaces: 256 00:07:46.211 Compare Command: Supported 00:07:46.211 Write Uncorrectable Command: Not Supported 00:07:46.211 Dataset Management Command: Supported 00:07:46.211 Write Zeroes Command: Supported 00:07:46.211 Set Features Save Field: Supported 00:07:46.211 Reservations: Not Supported 00:07:46.211 Timestamp: Supported 00:07:46.211 Copy: Supported 00:07:46.211 Volatile Write Cache: Present 00:07:46.211 Atomic Write Unit (Normal): 1 00:07:46.211 Atomic Write Unit (PFail): 1 00:07:46.211 Atomic Compare & Write Unit: 1 00:07:46.211 Fused Compare & Write: Not Supported 00:07:46.211 Scatter-Gather List 00:07:46.211 SGL Command Set: Supported 00:07:46.211 SGL Keyed: Not Supported 00:07:46.211 SGL Bit Bucket Descriptor: Not Supported 00:07:46.211 SGL Metadata Pointer: Not Supported 00:07:46.211 Oversized SGL: Not Supported 00:07:46.211 SGL Metadata Address: Not Supported 00:07:46.211 SGL Offset: Not Supported 00:07:46.211 Transport SGL Data Block: Not Supported 00:07:46.211 Replay Protected Memory Block: Not Supported 00:07:46.211 00:07:46.211 Firmware Slot Information 00:07:46.211 ========================= 00:07:46.211 Active slot: 1 00:07:46.211 Slot 1 Firmware Revision: 1.0 00:07:46.211 00:07:46.211 00:07:46.211 Commands Supported and Effects 00:07:46.211 ============================== 00:07:46.211 Admin Commands 00:07:46.211 -------------- 00:07:46.211 Delete I/O Submission Queue (00h): Supported 00:07:46.211 Create I/O Submission Queue (01h): Supported 00:07:46.211 Get Log Page (02h): Supported 00:07:46.211 Delete I/O Completion Queue (04h): Supported 00:07:46.211 Create I/O Completion Queue (05h): Supported 00:07:46.211 Identify (06h): Supported 
00:07:46.211 Abort (08h): Supported 00:07:46.211 Set Features (09h): Supported 00:07:46.211 Get Features (0Ah): Supported 00:07:46.211 Asynchronous Event Request (0Ch): Supported 00:07:46.211 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:46.211 Directive Send (19h): Supported 00:07:46.211 Directive Receive (1Ah): Supported 00:07:46.211 Virtualization Management (1Ch): Supported 00:07:46.211 Doorbell Buffer Config (7Ch): Supported 00:07:46.211 Format NVM (80h): Supported LBA-Change 00:07:46.211 I/O Commands 00:07:46.211 ------------ 00:07:46.211 Flush (00h): Supported LBA-Change 00:07:46.211 Write (01h): Supported LBA-Change 00:07:46.211 Read (02h): Supported 00:07:46.211 Compare (05h): Supported 00:07:46.211 Write Zeroes (08h): Supported LBA-Change 00:07:46.211 Dataset Management (09h): Supported LBA-Change 00:07:46.211 Unknown (0Ch): Supported 00:07:46.211 Unknown (12h): Supported 00:07:46.211 Copy (19h): Supported LBA-Change 00:07:46.211 Unknown (1Dh): Supported LBA-Change 00:07:46.211 00:07:46.211 Error Log 00:07:46.211 ========= 00:07:46.211 00:07:46.211 Arbitration 00:07:46.211 =========== 00:07:46.211 Arbitration Burst: no limit 00:07:46.211 00:07:46.211 Power Management 00:07:46.211 ================ 00:07:46.211 Number of Power States: 1 00:07:46.211 Current Power State: Power State #0 00:07:46.211 Power State #0: 00:07:46.211 Max Power: 25.00 W 00:07:46.211 Non-Operational State: Operational 00:07:46.211 Entry Latency: 16 microseconds 00:07:46.211 Exit Latency: 4 microseconds 00:07:46.211 Relative Read Throughput: 0 00:07:46.211 Relative Read Latency: 0 00:07:46.211 Relative Write Throughput: 0 00:07:46.211 Relative Write Latency: 0 00:07:46.211 Idle Power[2024-12-05 02:52:16.988248] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62806 terminated unexpected 00:07:46.211 : Not Reported 00:07:46.211 Active Power: Not Reported 00:07:46.211 Non-Operational Permissive Mode: Not Supported 00:07:46.211 00:07:46.211 Health Information 00:07:46.211 ================== 00:07:46.211 Critical Warnings: 00:07:46.211 Available Spare Space: OK 00:07:46.211 Temperature: OK 00:07:46.211 Device Reliability: OK 00:07:46.211 Read Only: No 00:07:46.211 Volatile Memory Backup: OK 00:07:46.211 Current Temperature: 323 Kelvin (50 Celsius) 00:07:46.211 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:46.211 Available Spare: 0% 00:07:46.211 Available Spare Threshold: 0% 00:07:46.211 Life Percentage Used: 0% 00:07:46.211 Data Units Read: 1040 00:07:46.211 Data Units Written: 906 00:07:46.211 Host Read Commands: 53479 00:07:46.211 Host Write Commands: 52270 00:07:46.211 Controller Busy Time: 0 minutes 00:07:46.211 Power Cycles: 0 00:07:46.211 Power On Hours: 0 hours 00:07:46.211 Unsafe Shutdowns: 0 00:07:46.211 Unrecoverable Media Errors: 0 00:07:46.211 Lifetime Error Log Entries: 0 00:07:46.211 Warning Temperature Time: 0 minutes 00:07:46.212 Critical Temperature Time: 0 minutes 00:07:46.212 00:07:46.212 Number of Queues 00:07:46.212 ================ 00:07:46.212 Number of I/O Submission Queues: 64 00:07:46.212 Number of I/O Completion Queues: 64 00:07:46.212 00:07:46.212 ZNS Specific Controller Data 00:07:46.212 ============================ 00:07:46.212 Zone Append Size Limit: 0 00:07:46.212 00:07:46.212 00:07:46.212 Active Namespaces 00:07:46.212 ================= 00:07:46.212 Namespace ID:1 00:07:46.212 Error Recovery Timeout: Unlimited 00:07:46.212 Command Set Identifier: NVM (00h) 00:07:46.212 Deallocate: Supported 00:07:46.212 
Deallocated/Unwritten Error: Supported 00:07:46.212 Deallocated Read Value: All 0x00 00:07:46.212 Deallocate in Write Zeroes: Not Supported 00:07:46.212 Deallocated Guard Field: 0xFFFF 00:07:46.212 Flush: Supported 00:07:46.212 Reservation: Not Supported 00:07:46.212 Namespace Sharing Capabilities: Private 00:07:46.212 Size (in LBAs): 1310720 (5GiB) 00:07:46.212 Capacity (in LBAs): 1310720 (5GiB) 00:07:46.212 Utilization (in LBAs): 1310720 (5GiB) 00:07:46.212 Thin Provisioning: Not Supported 00:07:46.212 Per-NS Atomic Units: No 00:07:46.212 Maximum Single Source Range Length: 128 00:07:46.212 Maximum Copy Length: 128 00:07:46.212 Maximum Source Range Count: 128 00:07:46.212 NGUID/EUI64 Never Reused: No 00:07:46.212 Namespace Write Protected: No 00:07:46.212 Number of LBA Formats: 8 00:07:46.212 Current LBA Format: LBA Format #04 00:07:46.212 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:46.212 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:46.212 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:46.212 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:46.212 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:46.212 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:46.212 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:46.212 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:46.212 00:07:46.212 NVM Specific Namespace Data 00:07:46.212 =========================== 00:07:46.212 Logical Block Storage Tag Mask: 0 00:07:46.212 Protection Information Capabilities: 00:07:46.212 16b Guard Protection Information Storage Tag Support: No 00:07:46.212 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:46.212 Storage Tag Check Read Support: No 00:07:46.212 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.212 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.212 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.212 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.212 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.212 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.212 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.212 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.212 ===================================================== 00:07:46.212 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:46.212 ===================================================== 00:07:46.212 Controller Capabilities/Features 00:07:46.212 ================================ 00:07:46.212 Vendor ID: 1b36 00:07:46.212 Subsystem Vendor ID: 1af4 00:07:46.212 Serial Number: 12343 00:07:46.212 Model Number: QEMU NVMe Ctrl 00:07:46.212 Firmware Version: 8.0.0 00:07:46.212 Recommended Arb Burst: 6 00:07:46.212 IEEE OUI Identifier: 00 54 52 00:07:46.212 Multi-path I/O 00:07:46.212 May have multiple subsystem ports: No 00:07:46.212 May have multiple controllers: Yes 00:07:46.212 Associated with SR-IOV VF: No 00:07:46.212 Max Data Transfer Size: 524288 00:07:46.212 Max Number of Namespaces: 256 00:07:46.212 Max Number of I/O Queues: 64 00:07:46.212 NVMe Specification Version (VS): 1.4 00:07:46.212 NVMe Specification Version (Identify): 1.4 00:07:46.212 Maximum Queue Entries: 
2048 00:07:46.212 Contiguous Queues Required: Yes 00:07:46.212 Arbitration Mechanisms Supported 00:07:46.212 Weighted Round Robin: Not Supported 00:07:46.212 Vendor Specific: Not Supported 00:07:46.212 Reset Timeout: 7500 ms 00:07:46.212 Doorbell Stride: 4 bytes 00:07:46.212 NVM Subsystem Reset: Not Supported 00:07:46.212 Command Sets Supported 00:07:46.212 NVM Command Set: Supported 00:07:46.212 Boot Partition: Not Supported 00:07:46.212 Memory Page Size Minimum: 4096 bytes 00:07:46.212 Memory Page Size Maximum: 65536 bytes 00:07:46.212 Persistent Memory Region: Not Supported 00:07:46.212 Optional Asynchronous Events Supported 00:07:46.212 Namespace Attribute Notices: Supported 00:07:46.212 Firmware Activation Notices: Not Supported 00:07:46.212 ANA Change Notices: Not Supported 00:07:46.212 PLE Aggregate Log Change Notices: Not Supported 00:07:46.212 LBA Status Info Alert Notices: Not Supported 00:07:46.212 EGE Aggregate Log Change Notices: Not Supported 00:07:46.212 Normal NVM Subsystem Shutdown event: Not Supported 00:07:46.212 Zone Descriptor Change Notices: Not Supported 00:07:46.212 Discovery Log Change Notices: Not Supported 00:07:46.212 Controller Attributes 00:07:46.212 128-bit Host Identifier: Not Supported 00:07:46.212 Non-Operational Permissive Mode: Not Supported 00:07:46.212 NVM Sets: Not Supported 00:07:46.212 Read Recovery Levels: Not Supported 00:07:46.212 Endurance Groups: Supported 00:07:46.212 Predictable Latency Mode: Not Supported 00:07:46.212 Traffic Based Keep ALive: Not Supported 00:07:46.212 Namespace Granularity: Not Supported 00:07:46.212 SQ Associations: Not Supported 00:07:46.212 UUID List: Not Supported 00:07:46.212 Multi-Domain Subsystem: Not Supported 00:07:46.212 Fixed Capacity Management: Not Supported 00:07:46.212 Variable Capacity Management: Not Supported 00:07:46.212 Delete Endurance Group: Not Supported 00:07:46.212 Delete NVM Set: Not Supported 00:07:46.212 Extended LBA Formats Supported: Supported 00:07:46.212 Flexible Data Placement Supported: Supported 00:07:46.212 00:07:46.212 Controller Memory Buffer Support 00:07:46.212 ================================ 00:07:46.212 Supported: No 00:07:46.212 00:07:46.212 Persistent Memory Region Support 00:07:46.212 ================================ 00:07:46.212 Supported: No 00:07:46.212 00:07:46.212 Admin Command Set Attributes 00:07:46.212 ============================ 00:07:46.212 Security Send/Receive: Not Supported 00:07:46.212 Format NVM: Supported 00:07:46.212 Firmware Activate/Download: Not Supported 00:07:46.212 Namespace Management: Supported 00:07:46.212 Device Self-Test: Not Supported 00:07:46.212 Directives: Supported 00:07:46.212 NVMe-MI: Not Supported 00:07:46.212 Virtualization Management: Not Supported 00:07:46.212 Doorbell Buffer Config: Supported 00:07:46.212 Get LBA Status Capability: Not Supported 00:07:46.212 Command & Feature Lockdown Capability: Not Supported 00:07:46.212 Abort Command Limit: 4 00:07:46.212 Async Event Request Limit: 4 00:07:46.212 Number of Firmware Slots: N/A 00:07:46.212 Firmware Slot 1 Read-Only: N/A 00:07:46.212 Firmware Activation Without Reset: N/A 00:07:46.212 Multiple Update Detection Support: N/A 00:07:46.212 Firmware Update Granularity: No Information Provided 00:07:46.212 Per-Namespace SMART Log: Yes 00:07:46.212 Asymmetric Namespace Access Log Page: Not Supported 00:07:46.212 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:46.212 Command Effects Log Page: Supported 00:07:46.212 Get Log Page Extended Data: Supported 00:07:46.212 Telemetry Log Pages: 
Not Supported 00:07:46.212 Persistent Event Log Pages: Not Supported 00:07:46.212 Supported Log Pages Log Page: May Support 00:07:46.212 Commands Supported & Effects Log Page: Not Supported 00:07:46.212 Feature Identifiers & Effects Log Page:May Support 00:07:46.212 NVMe-MI Commands & Effects Log Page: May Support 00:07:46.212 Data Area 4 for Telemetry Log: Not Supported 00:07:46.212 Error Log Page Entries Supported: 1 00:07:46.212 Keep Alive: Not Supported 00:07:46.212 00:07:46.212 NVM Command Set Attributes 00:07:46.212 ========================== 00:07:46.212 Submission Queue Entry Size 00:07:46.212 Max: 64 00:07:46.212 Min: 64 00:07:46.212 Completion Queue Entry Size 00:07:46.212 Max: 16 00:07:46.212 Min: 16 00:07:46.212 Number of Namespaces: 256 00:07:46.212 Compare Command: Supported 00:07:46.213 Write Uncorrectable Command: Not Supported 00:07:46.213 Dataset Management Command: Supported 00:07:46.213 Write Zeroes Command: Supported 00:07:46.213 Set Features Save Field: Supported 00:07:46.213 Reservations: Not Supported 00:07:46.213 Timestamp: Supported 00:07:46.213 Copy: Supported 00:07:46.213 Volatile Write Cache: Present 00:07:46.213 Atomic Write Unit (Normal): 1 00:07:46.213 Atomic Write Unit (PFail): 1 00:07:46.213 Atomic Compare & Write Unit: 1 00:07:46.213 Fused Compare & Write: Not Supported 00:07:46.213 Scatter-Gather List 00:07:46.213 SGL Command Set: Supported 00:07:46.213 SGL Keyed: Not Supported 00:07:46.213 SGL Bit Bucket Descriptor: Not Supported 00:07:46.213 SGL Metadata Pointer: Not Supported 00:07:46.213 Oversized SGL: Not Supported 00:07:46.213 SGL Metadata Address: Not Supported 00:07:46.213 SGL Offset: Not Supported 00:07:46.213 Transport SGL Data Block: Not Supported 00:07:46.213 Replay Protected Memory Block: Not Supported 00:07:46.213 00:07:46.213 Firmware Slot Information 00:07:46.213 ========================= 00:07:46.213 Active slot: 1 00:07:46.213 Slot 1 Firmware Revision: 1.0 00:07:46.213 00:07:46.213 00:07:46.213 Commands Supported and Effects 00:07:46.213 ============================== 00:07:46.213 Admin Commands 00:07:46.213 -------------- 00:07:46.213 Delete I/O Submission Queue (00h): Supported 00:07:46.213 Create I/O Submission Queue (01h): Supported 00:07:46.213 Get Log Page (02h): Supported 00:07:46.213 Delete I/O Completion Queue (04h): Supported 00:07:46.213 Create I/O Completion Queue (05h): Supported 00:07:46.213 Identify (06h): Supported 00:07:46.213 Abort (08h): Supported 00:07:46.213 Set Features (09h): Supported 00:07:46.213 Get Features (0Ah): Supported 00:07:46.213 Asynchronous Event Request (0Ch): Supported 00:07:46.213 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:46.213 Directive Send (19h): Supported 00:07:46.213 Directive Receive (1Ah): Supported 00:07:46.213 Virtualization Management (1Ch): Supported 00:07:46.213 Doorbell Buffer Config (7Ch): Supported 00:07:46.213 Format NVM (80h): Supported LBA-Change 00:07:46.213 I/O Commands 00:07:46.213 ------------ 00:07:46.213 Flush (00h): Supported LBA-Change 00:07:46.213 Write (01h): Supported LBA-Change 00:07:46.213 Read (02h): Supported 00:07:46.213 Compare (05h): Supported 00:07:46.213 Write Zeroes (08h): Supported LBA-Change 00:07:46.213 Dataset Management (09h): Supported LBA-Change 00:07:46.213 Unknown (0Ch): Supported 00:07:46.213 Unknown (12h): Supported 00:07:46.213 Copy (19h): Supported LBA-Change 00:07:46.213 Unknown (1Dh): Supported LBA-Change 00:07:46.213 00:07:46.213 Error Log 00:07:46.213 ========= 00:07:46.213 00:07:46.213 Arbitration 00:07:46.213 
=========== 00:07:46.213 Arbitration Burst: no limit 00:07:46.213 00:07:46.213 Power Management 00:07:46.213 ================ 00:07:46.213 Number of Power States: 1 00:07:46.213 Current Power State: Power State #0 00:07:46.213 Power State #0: 00:07:46.213 Max Power: 25.00 W 00:07:46.213 Non-Operational State: Operational 00:07:46.213 Entry Latency: 16 microseconds 00:07:46.213 Exit Latency: 4 microseconds 00:07:46.213 Relative Read Throughput: 0 00:07:46.213 Relative Read Latency: 0 00:07:46.213 Relative Write Throughput: 0 00:07:46.213 Relative Write Latency: 0 00:07:46.213 Idle Power: Not Reported 00:07:46.213 Active Power: Not Reported 00:07:46.213 Non-Operational Permissive Mode: Not Supported 00:07:46.213 00:07:46.213 Health Information 00:07:46.213 ================== 00:07:46.213 Critical Warnings: 00:07:46.213 Available Spare Space: OK 00:07:46.213 Temperature: OK 00:07:46.213 Device Reliability: OK 00:07:46.213 Read Only: No 00:07:46.213 Volatile Memory Backup: OK 00:07:46.213 Current Temperature: 323 Kelvin (50 Celsius) 00:07:46.213 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:46.213 Available Spare: 0% 00:07:46.213 Available Spare Threshold: 0% 00:07:46.213 Life Percentage Used: 0% 00:07:46.213 Data Units Read: 827 00:07:46.213 Data Units Written: 756 00:07:46.213 Host Read Commands: 36837 00:07:46.213 Host Write Commands: 36261 00:07:46.213 Controller Busy Time: 0 minutes 00:07:46.213 Power Cycles: 0 00:07:46.213 Power On Hours: 0 hours 00:07:46.213 Unsafe Shutdowns: 0 00:07:46.213 Unrecoverable Media Errors: 0 00:07:46.213 Lifetime Error Log Entries: 0 00:07:46.213 Warning Temperature Time: 0 minutes 00:07:46.213 Critical Temperature Time: 0 minutes 00:07:46.213 00:07:46.213 Number of Queues 00:07:46.213 ================ 00:07:46.213 Number of I/O Submission Queues: 64 00:07:46.213 Number of I/O Completion Queues: 64 00:07:46.213 00:07:46.213 ZNS Specific Controller Data 00:07:46.213 ============================ 00:07:46.213 Zone Append Size Limit: 0 00:07:46.213 00:07:46.213 00:07:46.213 Active Namespaces 00:07:46.213 ================= 00:07:46.213 Namespace ID:1 00:07:46.213 Error Recovery Timeout: Unlimited 00:07:46.213 Command Set Identifier: NVM (00h) 00:07:46.213 Deallocate: Supported 00:07:46.213 Deallocated/Unwritten Error: Supported 00:07:46.213 Deallocated Read Value: All 0x00 00:07:46.213 Deallocate in Write Zeroes: Not Supported 00:07:46.213 Deallocated Guard Field: 0xFFFF 00:07:46.213 Flush: Supported 00:07:46.213 Reservation: Not Supported 00:07:46.213 Namespace Sharing Capabilities: Multiple Controllers 00:07:46.213 Size (in LBAs): 262144 (1GiB) 00:07:46.213 Capacity (in LBAs): 262144 (1GiB) 00:07:46.213 Utilization (in LBAs): 262144 (1GiB) 00:07:46.213 Thin Provisioning: Not Supported 00:07:46.213 Per-NS Atomic Units: No 00:07:46.213 Maximum Single Source Range Length: 128 00:07:46.213 Maximum Copy Length: 128 00:07:46.213 Maximum Source Range Count: 128 00:07:46.213 NGUID/EUI64 Never Reused: No 00:07:46.213 Namespace Write Protected: No 00:07:46.213 Endurance group ID: 1 00:07:46.213 Number of LBA Formats: 8 00:07:46.213 Current LBA Format: LBA Format #04 00:07:46.213 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:46.213 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:46.213 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:46.213 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:46.213 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:46.213 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:46.213 LBA Format #06: Data 
Size: 4096 Metadata Size: 16 00:07:46.213 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:46.213 00:07:46.213 Get Feature FDP: 00:07:46.213 ================ 00:07:46.213 Enabled: Yes 00:07:46.213 FDP configuration index: 0 00:07:46.213 00:07:46.213 FDP configurations log page 00:07:46.213 =========================== 00:07:46.213 Number of FDP configurations: 1 00:07:46.213 Version: 0 00:07:46.213 Size: 112 00:07:46.213 FDP Configuration Descriptor: 0 00:07:46.213 Descriptor Size: 96 00:07:46.213 Reclaim Group Identifier format: 2 00:07:46.213 FDP Volatile Write Cache: Not Present 00:07:46.213 FDP Configuration: Valid 00:07:46.213 Vendor Specific Size: 0 00:07:46.213 Number of Reclaim Groups: 2 00:07:46.213 Number of Recalim Unit Handles: 8 00:07:46.213 Max Placement Identifiers: 128 00:07:46.213 Number of Namespaces Suppprted: 256 00:07:46.213 Reclaim unit Nominal Size: 6000000 bytes 00:07:46.213 Estimated Reclaim Unit Time Limit: Not Reported 00:07:46.213 RUH Desc #000: RUH Type: Initially Isolated 00:07:46.213 RUH Desc #001: RUH Type: Initially Isolated 00:07:46.213 RUH Desc #002: RUH Type: Initially Isolated 00:07:46.213 RUH Desc #003: RUH Type: Initially Isolated 00:07:46.213 RUH Desc #004: RUH Type: Initially Isolated 00:07:46.213 RUH Desc #005: RUH Type: Initially Isolated 00:07:46.213 RUH Desc #006: RUH Type: Initially Isolated 00:07:46.213 RUH Desc #007: RUH Type: Initially Isolated 00:07:46.213 00:07:46.213 FDP reclaim unit handle usage log page 00:07:46.213 ====================================== 00:07:46.213 Number of Reclaim Unit Handles: 8 00:07:46.213 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:46.213 RUH Usage Desc #001: RUH Attributes: Unused 00:07:46.213 RUH Usage Desc #002: RUH Attributes: Unused 00:07:46.213 RUH Usage Desc #003: RUH Attributes: Unused 00:07:46.213 RUH Usage Desc #004: RUH Attributes: Unused 00:07:46.213 R[2024-12-05 02:52:16.991565] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62806 terminated unexpected 00:07:46.213 UH Usage Desc #005: RUH Attributes: Unused 00:07:46.213 RUH Usage Desc #006: RUH Attributes: Unused 00:07:46.213 RUH Usage Desc #007: RUH Attributes: Unused 00:07:46.213 00:07:46.213 FDP statistics log page 00:07:46.213 ======================= 00:07:46.213 Host bytes with metadata written: 489529344 00:07:46.213 Media bytes with metadata written: 489582592 00:07:46.213 Media bytes erased: 0 00:07:46.213 00:07:46.213 FDP events log page 00:07:46.213 =================== 00:07:46.213 Number of FDP events: 0 00:07:46.214 00:07:46.214 NVM Specific Namespace Data 00:07:46.214 =========================== 00:07:46.214 Logical Block Storage Tag Mask: 0 00:07:46.214 Protection Information Capabilities: 00:07:46.214 16b Guard Protection Information Storage Tag Support: No 00:07:46.214 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:46.214 Storage Tag Check Read Support: No 00:07:46.214 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.214 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.214 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.214 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.214 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.214 Extended LBA Format #05: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:07:46.214 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.214 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.214 ===================================================== 00:07:46.214 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:46.214 ===================================================== 00:07:46.214 Controller Capabilities/Features 00:07:46.214 ================================ 00:07:46.214 Vendor ID: 1b36 00:07:46.214 Subsystem Vendor ID: 1af4 00:07:46.214 Serial Number: 12340 00:07:46.214 Model Number: QEMU NVMe Ctrl 00:07:46.214 Firmware Version: 8.0.0 00:07:46.214 Recommended Arb Burst: 6 00:07:46.214 IEEE OUI Identifier: 00 54 52 00:07:46.214 Multi-path I/O 00:07:46.214 May have multiple subsystem ports: No 00:07:46.214 May have multiple controllers: No 00:07:46.214 Associated with SR-IOV VF: No 00:07:46.214 Max Data Transfer Size: 524288 00:07:46.214 Max Number of Namespaces: 256 00:07:46.214 Max Number of I/O Queues: 64 00:07:46.214 NVMe Specification Version (VS): 1.4 00:07:46.214 NVMe Specification Version (Identify): 1.4 00:07:46.214 Maximum Queue Entries: 2048 00:07:46.214 Contiguous Queues Required: Yes 00:07:46.214 Arbitration Mechanisms Supported 00:07:46.214 Weighted Round Robin: Not Supported 00:07:46.214 Vendor Specific: Not Supported 00:07:46.214 Reset Timeout: 7500 ms 00:07:46.214 Doorbell Stride: 4 bytes 00:07:46.214 NVM Subsystem Reset: Not Supported 00:07:46.214 Command Sets Supported 00:07:46.214 NVM Command Set: Supported 00:07:46.214 Boot Partition: Not Supported 00:07:46.214 Memory Page Size Minimum: 4096 bytes 00:07:46.214 Memory Page Size Maximum: 65536 bytes 00:07:46.214 Persistent Memory Region: Not Supported 00:07:46.214 Optional Asynchronous Events Supported 00:07:46.214 Namespace Attribute Notices: Supported 00:07:46.214 Firmware Activation Notices: Not Supported 00:07:46.214 ANA Change Notices: Not Supported 00:07:46.214 PLE Aggregate Log Change Notices: Not Supported 00:07:46.214 LBA Status Info Alert Notices: Not Supported 00:07:46.214 EGE Aggregate Log Change Notices: Not Supported 00:07:46.214 Normal NVM Subsystem Shutdown event: Not Supported 00:07:46.214 Zone Descriptor Change Notices: Not Supported 00:07:46.214 Discovery Log Change Notices: Not Supported 00:07:46.214 Controller Attributes 00:07:46.214 128-bit Host Identifier: Not Supported 00:07:46.214 Non-Operational Permissive Mode: Not Supported 00:07:46.214 NVM Sets: Not Supported 00:07:46.214 Read Recovery Levels: Not Supported 00:07:46.214 Endurance Groups: Not Supported 00:07:46.214 Predictable Latency Mode: Not Supported 00:07:46.214 Traffic Based Keep ALive: Not Supported 00:07:46.214 Namespace Granularity: Not Supported 00:07:46.214 SQ Associations: Not Supported 00:07:46.214 UUID List: Not Supported 00:07:46.214 Multi-Domain Subsystem: Not Supported 00:07:46.214 Fixed Capacity Management: Not Supported 00:07:46.214 Variable Capacity Management: Not Supported 00:07:46.214 Delete Endurance Group: Not Supported 00:07:46.214 Delete NVM Set: Not Supported 00:07:46.214 Extended LBA Formats Supported: Supported 00:07:46.214 Flexible Data Placement Supported: Not Supported 00:07:46.214 00:07:46.214 Controller Memory Buffer Support 00:07:46.214 ================================ 00:07:46.214 Supported: No 00:07:46.214 00:07:46.214 Persistent Memory Region Support 00:07:46.214 ================================ 00:07:46.214 Supported: No 00:07:46.214 
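[editor's note] The controller dumps above and below come from the spdk_nvme_identify invocation shown earlier in this run (build/bin/spdk_nvme_identify -i 0). As a rough sketch only (the grep patterns and the variable names here are illustrative assumptions, not part of this test run), individual fields can be pulled out of the same output once the devices are bound via scripts/setup.sh, for example:

# Illustrative sketch, not part of the CI scripts: capture the identify dump
# produced by the same binary used above and print a few health/capability
# fields per controller. Field names match the output shown in this log.
out=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0)
echo "$out" | grep -E 'Serial Number:|Max Data Transfer Size:|Current Temperature:|Life Percentage Used:'
[end editor's note]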
00:07:46.214 Admin Command Set Attributes 00:07:46.214 ============================ 00:07:46.214 Security Send/Receive: Not Supported 00:07:46.214 Format NVM: Supported 00:07:46.214 Firmware Activate/Download: Not Supported 00:07:46.214 Namespace Management: Supported 00:07:46.214 Device Self-Test: Not Supported 00:07:46.214 Directives: Supported 00:07:46.214 NVMe-MI: Not Supported 00:07:46.214 Virtualization Management: Not Supported 00:07:46.214 Doorbell Buffer Config: Supported 00:07:46.214 Get LBA Status Capability: Not Supported 00:07:46.214 Command & Feature Lockdown Capability: Not Supported 00:07:46.214 Abort Command Limit: 4 00:07:46.214 Async Event Request Limit: 4 00:07:46.214 Number of Firmware Slots: N/A 00:07:46.214 Firmware Slot 1 Read-Only: N/A 00:07:46.214 Firmware Activation Without Reset: N/A 00:07:46.214 Multiple Update Detection Support: N/A 00:07:46.214 Firmware Update Granularity: No Information Provided 00:07:46.214 Per-Namespace SMART Log: Yes 00:07:46.214 Asymmetric Namespace Access Log Page: Not Supported 00:07:46.214 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:46.214 Command Effects Log Page: Supported 00:07:46.214 Get Log Page Extended Data: Supported 00:07:46.214 Telemetry Log Pages: Not Supported 00:07:46.214 Persistent Event Log Pages: Not Supported 00:07:46.214 Supported Log Pages Log Page: May Support 00:07:46.214 Commands Supported & Effects Log Page: Not Supported 00:07:46.214 Feature Identifiers & Effects Log Page:May Support 00:07:46.214 NVMe-MI Commands & Effects Log Page: May Support 00:07:46.214 Data Area 4 for Telemetry Log: Not Supported 00:07:46.214 Error Log Page Entries Supported: 1 00:07:46.214 Keep Alive: Not Supported 00:07:46.214 00:07:46.214 NVM Command Set Attributes 00:07:46.214 ========================== 00:07:46.214 Submission Queue Entry Size 00:07:46.214 Max: 64 00:07:46.214 Min: 64 00:07:46.214 Completion Queue Entry Size 00:07:46.214 Max: 16 00:07:46.214 Min: 16 00:07:46.214 Number of Namespaces: 256 00:07:46.214 Compare Command: Supported 00:07:46.214 Write Uncorrectable Command: Not Supported 00:07:46.214 Dataset Management Command: Supported 00:07:46.214 Write Zeroes Command: Supported 00:07:46.214 Set Features Save Field: Supported 00:07:46.214 Reservations: Not Supported 00:07:46.214 Timestamp: Supported 00:07:46.214 Copy: Supported 00:07:46.214 Volatile Write Cache: Present 00:07:46.214 Atomic Write Unit (Normal): 1 00:07:46.214 Atomic Write Unit (PFail): 1 00:07:46.214 Atomic Compare & Write Unit: 1 00:07:46.214 Fused Compare & Write: Not Supported 00:07:46.214 Scatter-Gather List 00:07:46.214 SGL Command Set: Supported 00:07:46.214 SGL Keyed: Not Supported 00:07:46.214 SGL Bit Bucket Descriptor: Not Supported 00:07:46.214 SGL Metadata Pointer: Not Supported 00:07:46.214 Oversized SGL: Not Supported 00:07:46.214 SGL Metadata Address: Not Supported 00:07:46.214 SGL Offset: Not Supported 00:07:46.214 Transport SGL Data Block: Not Supported 00:07:46.214 Replay Protected Memory Block: Not Supported 00:07:46.214 00:07:46.214 Firmware Slot Information 00:07:46.214 ========================= 00:07:46.214 Active slot: 1 00:07:46.214 Slot 1 Firmware Revision: 1.0 00:07:46.214 00:07:46.214 00:07:46.214 Commands Supported and Effects 00:07:46.214 ============================== 00:07:46.214 Admin Commands 00:07:46.214 -------------- 00:07:46.214 Delete I/O Submission Queue (00h): Supported 00:07:46.214 Create I/O Submission Queue (01h): Supported 00:07:46.214 Get Log Page (02h): Supported 00:07:46.214 Delete I/O Completion Queue 
(04h): Supported 00:07:46.214 Create I/O Completion Queue (05h): Supported 00:07:46.214 Identify (06h): Supported 00:07:46.214 Abort (08h): Supported 00:07:46.214 Set Features (09h): Supported 00:07:46.214 Get Features (0Ah): Supported 00:07:46.214 Asynchronous Event Request (0Ch): Supported 00:07:46.214 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:46.214 Directive Send (19h): Supported 00:07:46.214 Directive Receive (1Ah): Supported 00:07:46.214 Virtualization Management (1Ch): Supported 00:07:46.214 Doorbell Buffer Config (7Ch): Supported 00:07:46.214 Format NVM (80h): Supported LBA-Change 00:07:46.214 I/O Commands 00:07:46.214 ------------ 00:07:46.214 Flush (00h): Supported LBA-Change 00:07:46.214 Write (01h): Supported LBA-Change 00:07:46.214 Read (02h): Supported 00:07:46.214 Compare (05h): Supported 00:07:46.214 Write Zeroes (08h): Supported LBA-Change 00:07:46.214 Dataset Management (09h): Supported LBA-Change 00:07:46.214 Unknown (0Ch): Supported 00:07:46.214 Unknown (12h): Supported 00:07:46.215 Copy (19h): Supported LBA-Change 00:07:46.215 Unknown (1Dh): Supported LBA-Change 00:07:46.215 00:07:46.215 Error Log 00:07:46.215 ========= 00:07:46.215 00:07:46.215 Arbitration 00:07:46.215 =========== 00:07:46.215 Arbitration Burst: no limit 00:07:46.215 00:07:46.215 Power Management 00:07:46.215 ================ 00:07:46.215 Number of Power States: 1 00:07:46.215 Current Power State: Power State #0 00:07:46.215 Power State #0: 00:07:46.215 Max Power: 25.00 W 00:07:46.215 Non-Operational State: Operational 00:07:46.215 Entry Latency: 16 microseconds 00:07:46.215 Exit Latency: 4 microseconds 00:07:46.215 Relative Read Throughput: 0 00:07:46.215 Relative Read Latency: 0 00:07:46.215 Relative Write Throughput: 0 00:07:46.215 Relative Write Latency: 0 00:07:46.215 Idle Power: Not Reported 00:07:46.215 Active Power: Not Reported 00:07:46.215 Non-Operational Permissive Mode: Not Supported 00:07:46.215 00:07:46.215 Health Information 00:07:46.215 ================== 00:07:46.215 Critical Warnings: 00:07:46.215 Available Spare Space: OK 00:07:46.215 Temperature: OK 00:07:46.215 Device Reliability: OK 00:07:46.215 Read Only: No 00:07:46.215 Volatile Memory Backup: OK 00:07:46.215 Current Temperature: 323 Kelvin (50 Celsius) 00:07:46.215 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:46.215 Available Spare: 0% 00:07:46.215 Available Spare Threshold: 0% 00:07:46.215 Life Percentage Used: 0% 00:07:46.215 Data Units Read: 641 00:07:46.215 Data Units Written: 569 00:07:46.215 Host Read Commands: 34905 00:07:46.215 Host Write Commands: 34691 00:07:46.215 Controller Busy Time: 0 minutes 00:07:46.215 Power Cycles: 0 00:07:46.215 Power On Hours: 0 hours 00:07:46.215 Unsafe Shutdowns: 0 00:07:46.215 Unrecoverable Media Errors: 0 00:07:46.215 Lifetime Error Log Entries: 0 00:07:46.215 Warning Temperature Time: 0 minutes 00:07:46.215 Critical Temperature Time: 0 minutes 00:07:46.215 00:07:46.215 Number of Queues 00:07:46.215 ================ 00:07:46.215 Number of I/O Submission Queues: 64 00:07:46.215 Number of I/O Completion Queues: 64 00:07:46.215 00:07:46.215 ZNS Specific Controller Data 00:07:46.215 ============================ 00:07:46.215 Zone Append Size Limit: 0 00:07:46.215 00:07:46.215 00:07:46.215 Active Namespaces 00:07:46.215 ================= 00:07:46.215 Namespace ID:1 00:07:46.215 Error Recovery Timeout: Unlimited 00:07:46.215 Command Set Identifier: NVM (00h) 00:07:46.215 Deallocate: Supported 00:07:46.215 Deallocated/Unwritten Error: Supported 00:07:46.215 
Deallocated Read Value: All 0x00 00:07:46.215 Deallocate in Write Zeroes: Not Supported 00:07:46.215 Deallocated Guard Field: 0xFFFF 00:07:46.215 Flush: Supported 00:07:46.215 Reservation: Not Supported 00:07:46.215 Metadata Transferred as: Separate Metadata Buffer 00:07:46.215 Namespace Sharing Capabilities: Private 00:07:46.215 Size (in LBAs): 1548666 (5GiB) 00:07:46.215 Capacity (in LBAs): 1548666 (5GiB) 00:07:46.215 Utilization (in LBAs): 1548666 (5GiB) 00:07:46.215 Thin Provisioning: Not Supported 00:07:46.215 Per-NS Atomic Units: No 00:07:46.215 Maximum Single Source Range Length: 128 00:07:46.215 Maximum Copy Length: 128 00:07:46.215 Maximum Source Range Count: 128 00:07:46.215 NGUID/EUI64 Never Reused: No 00:07:46.215 Namespace Write Protected: No 00:07:46.215 Number of LBA Formats: 8 00:07:46.215 Current LBA Format: LBA Format #07 00:07:46.215 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:46.215 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:46.215 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:46.215 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:46.215 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:46.215 LBA Forma[2024-12-05 02:52:16.993355] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62806 terminated unexpected 00:07:46.215 t #05: Data Size: 4096 Metadata Size: 8 00:07:46.215 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:46.215 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:46.215 00:07:46.215 NVM Specific Namespace Data 00:07:46.215 =========================== 00:07:46.215 Logical Block Storage Tag Mask: 0 00:07:46.215 Protection Information Capabilities: 00:07:46.215 16b Guard Protection Information Storage Tag Support: No 00:07:46.215 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:46.215 Storage Tag Check Read Support: No 00:07:46.215 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.215 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.215 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.215 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.215 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.215 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.215 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.215 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.215 ===================================================== 00:07:46.215 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:46.215 ===================================================== 00:07:46.215 Controller Capabilities/Features 00:07:46.215 ================================ 00:07:46.215 Vendor ID: 1b36 00:07:46.215 Subsystem Vendor ID: 1af4 00:07:46.215 Serial Number: 12342 00:07:46.215 Model Number: QEMU NVMe Ctrl 00:07:46.215 Firmware Version: 8.0.0 00:07:46.215 Recommended Arb Burst: 6 00:07:46.215 IEEE OUI Identifier: 00 54 52 00:07:46.215 Multi-path I/O 00:07:46.215 May have multiple subsystem ports: No 00:07:46.215 May have multiple controllers: No 00:07:46.215 Associated with SR-IOV VF: No 00:07:46.215 Max Data Transfer Size: 524288 00:07:46.215 Max Number of Namespaces: 256 00:07:46.215 Max 
Number of I/O Queues: 64 00:07:46.215 NVMe Specification Version (VS): 1.4 00:07:46.215 NVMe Specification Version (Identify): 1.4 00:07:46.215 Maximum Queue Entries: 2048 00:07:46.215 Contiguous Queues Required: Yes 00:07:46.215 Arbitration Mechanisms Supported 00:07:46.215 Weighted Round Robin: Not Supported 00:07:46.215 Vendor Specific: Not Supported 00:07:46.215 Reset Timeout: 7500 ms 00:07:46.215 Doorbell Stride: 4 bytes 00:07:46.215 NVM Subsystem Reset: Not Supported 00:07:46.215 Command Sets Supported 00:07:46.215 NVM Command Set: Supported 00:07:46.215 Boot Partition: Not Supported 00:07:46.215 Memory Page Size Minimum: 4096 bytes 00:07:46.215 Memory Page Size Maximum: 65536 bytes 00:07:46.215 Persistent Memory Region: Not Supported 00:07:46.215 Optional Asynchronous Events Supported 00:07:46.215 Namespace Attribute Notices: Supported 00:07:46.215 Firmware Activation Notices: Not Supported 00:07:46.215 ANA Change Notices: Not Supported 00:07:46.215 PLE Aggregate Log Change Notices: Not Supported 00:07:46.215 LBA Status Info Alert Notices: Not Supported 00:07:46.215 EGE Aggregate Log Change Notices: Not Supported 00:07:46.215 Normal NVM Subsystem Shutdown event: Not Supported 00:07:46.215 Zone Descriptor Change Notices: Not Supported 00:07:46.215 Discovery Log Change Notices: Not Supported 00:07:46.215 Controller Attributes 00:07:46.215 128-bit Host Identifier: Not Supported 00:07:46.215 Non-Operational Permissive Mode: Not Supported 00:07:46.215 NVM Sets: Not Supported 00:07:46.215 Read Recovery Levels: Not Supported 00:07:46.215 Endurance Groups: Not Supported 00:07:46.216 Predictable Latency Mode: Not Supported 00:07:46.216 Traffic Based Keep ALive: Not Supported 00:07:46.216 Namespace Granularity: Not Supported 00:07:46.216 SQ Associations: Not Supported 00:07:46.216 UUID List: Not Supported 00:07:46.216 Multi-Domain Subsystem: Not Supported 00:07:46.216 Fixed Capacity Management: Not Supported 00:07:46.216 Variable Capacity Management: Not Supported 00:07:46.216 Delete Endurance Group: Not Supported 00:07:46.216 Delete NVM Set: Not Supported 00:07:46.216 Extended LBA Formats Supported: Supported 00:07:46.216 Flexible Data Placement Supported: Not Supported 00:07:46.216 00:07:46.216 Controller Memory Buffer Support 00:07:46.216 ================================ 00:07:46.216 Supported: No 00:07:46.216 00:07:46.216 Persistent Memory Region Support 00:07:46.216 ================================ 00:07:46.216 Supported: No 00:07:46.216 00:07:46.216 Admin Command Set Attributes 00:07:46.216 ============================ 00:07:46.216 Security Send/Receive: Not Supported 00:07:46.216 Format NVM: Supported 00:07:46.216 Firmware Activate/Download: Not Supported 00:07:46.216 Namespace Management: Supported 00:07:46.216 Device Self-Test: Not Supported 00:07:46.216 Directives: Supported 00:07:46.216 NVMe-MI: Not Supported 00:07:46.216 Virtualization Management: Not Supported 00:07:46.216 Doorbell Buffer Config: Supported 00:07:46.216 Get LBA Status Capability: Not Supported 00:07:46.216 Command & Feature Lockdown Capability: Not Supported 00:07:46.216 Abort Command Limit: 4 00:07:46.216 Async Event Request Limit: 4 00:07:46.216 Number of Firmware Slots: N/A 00:07:46.216 Firmware Slot 1 Read-Only: N/A 00:07:46.216 Firmware Activation Without Reset: N/A 00:07:46.216 Multiple Update Detection Support: N/A 00:07:46.216 Firmware Update Granularity: No Information Provided 00:07:46.216 Per-Namespace SMART Log: Yes 00:07:46.216 Asymmetric Namespace Access Log Page: Not Supported 00:07:46.216 Subsystem 
NQN: nqn.2019-08.org.qemu:12342 00:07:46.216 Command Effects Log Page: Supported 00:07:46.216 Get Log Page Extended Data: Supported 00:07:46.216 Telemetry Log Pages: Not Supported 00:07:46.216 Persistent Event Log Pages: Not Supported 00:07:46.216 Supported Log Pages Log Page: May Support 00:07:46.216 Commands Supported & Effects Log Page: Not Supported 00:07:46.216 Feature Identifiers & Effects Log Page:May Support 00:07:46.216 NVMe-MI Commands & Effects Log Page: May Support 00:07:46.216 Data Area 4 for Telemetry Log: Not Supported 00:07:46.216 Error Log Page Entries Supported: 1 00:07:46.216 Keep Alive: Not Supported 00:07:46.216 00:07:46.216 NVM Command Set Attributes 00:07:46.216 ========================== 00:07:46.216 Submission Queue Entry Size 00:07:46.216 Max: 64 00:07:46.216 Min: 64 00:07:46.216 Completion Queue Entry Size 00:07:46.216 Max: 16 00:07:46.216 Min: 16 00:07:46.216 Number of Namespaces: 256 00:07:46.216 Compare Command: Supported 00:07:46.216 Write Uncorrectable Command: Not Supported 00:07:46.216 Dataset Management Command: Supported 00:07:46.216 Write Zeroes Command: Supported 00:07:46.216 Set Features Save Field: Supported 00:07:46.216 Reservations: Not Supported 00:07:46.216 Timestamp: Supported 00:07:46.216 Copy: Supported 00:07:46.216 Volatile Write Cache: Present 00:07:46.216 Atomic Write Unit (Normal): 1 00:07:46.216 Atomic Write Unit (PFail): 1 00:07:46.216 Atomic Compare & Write Unit: 1 00:07:46.216 Fused Compare & Write: Not Supported 00:07:46.216 Scatter-Gather List 00:07:46.216 SGL Command Set: Supported 00:07:46.216 SGL Keyed: Not Supported 00:07:46.216 SGL Bit Bucket Descriptor: Not Supported 00:07:46.216 SGL Metadata Pointer: Not Supported 00:07:46.216 Oversized SGL: Not Supported 00:07:46.216 SGL Metadata Address: Not Supported 00:07:46.216 SGL Offset: Not Supported 00:07:46.216 Transport SGL Data Block: Not Supported 00:07:46.216 Replay Protected Memory Block: Not Supported 00:07:46.216 00:07:46.216 Firmware Slot Information 00:07:46.216 ========================= 00:07:46.216 Active slot: 1 00:07:46.216 Slot 1 Firmware Revision: 1.0 00:07:46.216 00:07:46.216 00:07:46.216 Commands Supported and Effects 00:07:46.216 ============================== 00:07:46.216 Admin Commands 00:07:46.216 -------------- 00:07:46.216 Delete I/O Submission Queue (00h): Supported 00:07:46.216 Create I/O Submission Queue (01h): Supported 00:07:46.216 Get Log Page (02h): Supported 00:07:46.216 Delete I/O Completion Queue (04h): Supported 00:07:46.216 Create I/O Completion Queue (05h): Supported 00:07:46.216 Identify (06h): Supported 00:07:46.216 Abort (08h): Supported 00:07:46.216 Set Features (09h): Supported 00:07:46.216 Get Features (0Ah): Supported 00:07:46.216 Asynchronous Event Request (0Ch): Supported 00:07:46.216 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:46.216 Directive Send (19h): Supported 00:07:46.216 Directive Receive (1Ah): Supported 00:07:46.216 Virtualization Management (1Ch): Supported 00:07:46.216 Doorbell Buffer Config (7Ch): Supported 00:07:46.216 Format NVM (80h): Supported LBA-Change 00:07:46.216 I/O Commands 00:07:46.216 ------------ 00:07:46.216 Flush (00h): Supported LBA-Change 00:07:46.216 Write (01h): Supported LBA-Change 00:07:46.216 Read (02h): Supported 00:07:46.216 Compare (05h): Supported 00:07:46.216 Write Zeroes (08h): Supported LBA-Change 00:07:46.216 Dataset Management (09h): Supported LBA-Change 00:07:46.216 Unknown (0Ch): Supported 00:07:46.216 Unknown (12h): Supported 00:07:46.216 Copy (19h): Supported LBA-Change 
00:07:46.216 Unknown (1Dh): Supported LBA-Change 00:07:46.216 00:07:46.216 Error Log 00:07:46.216 ========= 00:07:46.216 00:07:46.216 Arbitration 00:07:46.216 =========== 00:07:46.216 Arbitration Burst: no limit 00:07:46.216 00:07:46.216 Power Management 00:07:46.216 ================ 00:07:46.216 Number of Power States: 1 00:07:46.216 Current Power State: Power State #0 00:07:46.216 Power State #0: 00:07:46.216 Max Power: 25.00 W 00:07:46.216 Non-Operational State: Operational 00:07:46.216 Entry Latency: 16 microseconds 00:07:46.216 Exit Latency: 4 microseconds 00:07:46.216 Relative Read Throughput: 0 00:07:46.216 Relative Read Latency: 0 00:07:46.216 Relative Write Throughput: 0 00:07:46.216 Relative Write Latency: 0 00:07:46.216 Idle Power: Not Reported 00:07:46.216 Active Power: Not Reported 00:07:46.216 Non-Operational Permissive Mode: Not Supported 00:07:46.216 00:07:46.216 Health Information 00:07:46.216 ================== 00:07:46.216 Critical Warnings: 00:07:46.216 Available Spare Space: OK 00:07:46.216 Temperature: OK 00:07:46.216 Device Reliability: OK 00:07:46.216 Read Only: No 00:07:46.216 Volatile Memory Backup: OK 00:07:46.216 Current Temperature: 323 Kelvin (50 Celsius) 00:07:46.216 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:46.216 Available Spare: 0% 00:07:46.216 Available Spare Threshold: 0% 00:07:46.216 Life Percentage Used: 0% 00:07:46.216 Data Units Read: 2160 00:07:46.216 Data Units Written: 1947 00:07:46.216 Host Read Commands: 107596 00:07:46.216 Host Write Commands: 105866 00:07:46.216 Controller Busy Time: 0 minutes 00:07:46.216 Power Cycles: 0 00:07:46.216 Power On Hours: 0 hours 00:07:46.216 Unsafe Shutdowns: 0 00:07:46.216 Unrecoverable Media Errors: 0 00:07:46.216 Lifetime Error Log Entries: 0 00:07:46.216 Warning Temperature Time: 0 minutes 00:07:46.216 Critical Temperature Time: 0 minutes 00:07:46.216 00:07:46.216 Number of Queues 00:07:46.216 ================ 00:07:46.216 Number of I/O Submission Queues: 64 00:07:46.216 Number of I/O Completion Queues: 64 00:07:46.216 00:07:46.216 ZNS Specific Controller Data 00:07:46.216 ============================ 00:07:46.216 Zone Append Size Limit: 0 00:07:46.216 00:07:46.216 00:07:46.216 Active Namespaces 00:07:46.216 ================= 00:07:46.216 Namespace ID:1 00:07:46.216 Error Recovery Timeout: Unlimited 00:07:46.216 Command Set Identifier: NVM (00h) 00:07:46.216 Deallocate: Supported 00:07:46.216 Deallocated/Unwritten Error: Supported 00:07:46.216 Deallocated Read Value: All 0x00 00:07:46.216 Deallocate in Write Zeroes: Not Supported 00:07:46.216 Deallocated Guard Field: 0xFFFF 00:07:46.216 Flush: Supported 00:07:46.216 Reservation: Not Supported 00:07:46.216 Namespace Sharing Capabilities: Private 00:07:46.216 Size (in LBAs): 1048576 (4GiB) 00:07:46.216 Capacity (in LBAs): 1048576 (4GiB) 00:07:46.216 Utilization (in LBAs): 1048576 (4GiB) 00:07:46.216 Thin Provisioning: Not Supported 00:07:46.216 Per-NS Atomic Units: No 00:07:46.216 Maximum Single Source Range Length: 128 00:07:46.217 Maximum Copy Length: 128 00:07:46.217 Maximum Source Range Count: 128 00:07:46.217 NGUID/EUI64 Never Reused: No 00:07:46.217 Namespace Write Protected: No 00:07:46.217 Number of LBA Formats: 8 00:07:46.217 Current LBA Format: LBA Format #04 00:07:46.217 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:46.217 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:46.217 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:46.217 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:46.217 LBA Format #04: Data Size: 
4096 Metadata Size: 0 00:07:46.217 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:46.217 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:46.217 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:46.217 00:07:46.217 NVM Specific Namespace Data 00:07:46.217 =========================== 00:07:46.217 Logical Block Storage Tag Mask: 0 00:07:46.217 Protection Information Capabilities: 00:07:46.217 16b Guard Protection Information Storage Tag Support: No 00:07:46.217 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:46.217 Storage Tag Check Read Support: No 00:07:46.217 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.217 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.217 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.217 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.217 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.217 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.217 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.217 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.217 Namespace ID:2 00:07:46.217 Error Recovery Timeout: Unlimited 00:07:46.217 Command Set Identifier: NVM (00h) 00:07:46.217 Deallocate: Supported 00:07:46.217 Deallocated/Unwritten Error: Supported 00:07:46.217 Deallocated Read Value: All 0x00 00:07:46.217 Deallocate in Write Zeroes: Not Supported 00:07:46.217 Deallocated Guard Field: 0xFFFF 00:07:46.217 Flush: Supported 00:07:46.217 Reservation: Not Supported 00:07:46.217 Namespace Sharing Capabilities: Private 00:07:46.217 Size (in LBAs): 1048576 (4GiB) 00:07:46.217 Capacity (in LBAs): 1048576 (4GiB) 00:07:46.217 Utilization (in LBAs): 1048576 (4GiB) 00:07:46.217 Thin Provisioning: Not Supported 00:07:46.217 Per-NS Atomic Units: No 00:07:46.217 Maximum Single Source Range Length: 128 00:07:46.217 Maximum Copy Length: 128 00:07:46.217 Maximum Source Range Count: 128 00:07:46.217 NGUID/EUI64 Never Reused: No 00:07:46.217 Namespace Write Protected: No 00:07:46.217 Number of LBA Formats: 8 00:07:46.217 Current LBA Format: LBA Format #04 00:07:46.217 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:46.217 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:46.217 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:46.217 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:46.217 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:46.217 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:46.217 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:46.217 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:46.217 00:07:46.217 NVM Specific Namespace Data 00:07:46.217 =========================== 00:07:46.217 Logical Block Storage Tag Mask: 0 00:07:46.217 Protection Information Capabilities: 00:07:46.217 16b Guard Protection Information Storage Tag Support: No 00:07:46.217 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:46.217 Storage Tag Check Read Support: No 00:07:46.217 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.217 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard 
PI 00:07:46.217 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.217 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.217 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.217 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.217 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.217 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.217 Namespace ID:3 00:07:46.217 Error Recovery Timeout: Unlimited 00:07:46.217 Command Set Identifier: NVM (00h) 00:07:46.217 Deallocate: Supported 00:07:46.217 Deallocated/Unwritten Error: Supported 00:07:46.217 Deallocated Read Value: All 0x00 00:07:46.217 Deallocate in Write Zeroes: Not Supported 00:07:46.217 Deallocated Guard Field: 0xFFFF 00:07:46.217 Flush: Supported 00:07:46.217 Reservation: Not Supported 00:07:46.217 Namespace Sharing Capabilities: Private 00:07:46.217 Size (in LBAs): 1048576 (4GiB) 00:07:46.217 Capacity (in LBAs): 1048576 (4GiB) 00:07:46.217 Utilization (in LBAs): 1048576 (4GiB) 00:07:46.217 Thin Provisioning: Not Supported 00:07:46.217 Per-NS Atomic Units: No 00:07:46.217 Maximum Single Source Range Length: 128 00:07:46.217 Maximum Copy Length: 128 00:07:46.217 Maximum Source Range Count: 128 00:07:46.217 NGUID/EUI64 Never Reused: No 00:07:46.217 Namespace Write Protected: No 00:07:46.217 Number of LBA Formats: 8 00:07:46.217 Current LBA Format: LBA Format #04 00:07:46.217 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:46.217 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:46.217 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:46.217 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:46.217 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:46.217 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:46.217 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:46.217 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:46.217 00:07:46.217 NVM Specific Namespace Data 00:07:46.217 =========================== 00:07:46.217 Logical Block Storage Tag Mask: 0 00:07:46.217 Protection Information Capabilities: 00:07:46.217 16b Guard Protection Information Storage Tag Support: No 00:07:46.217 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:46.217 Storage Tag Check Read Support: No 00:07:46.217 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.217 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.217 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.217 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.217 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.217 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.217 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.217 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.217 02:52:17 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:46.217 02:52:17 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:46.477 ===================================================== 00:07:46.477 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:46.477 ===================================================== 00:07:46.477 Controller Capabilities/Features 00:07:46.477 ================================ 00:07:46.477 Vendor ID: 1b36 00:07:46.477 Subsystem Vendor ID: 1af4 00:07:46.478 Serial Number: 12340 00:07:46.478 Model Number: QEMU NVMe Ctrl 00:07:46.478 Firmware Version: 8.0.0 00:07:46.478 Recommended Arb Burst: 6 00:07:46.478 IEEE OUI Identifier: 00 54 52 00:07:46.478 Multi-path I/O 00:07:46.478 May have multiple subsystem ports: No 00:07:46.478 May have multiple controllers: No 00:07:46.478 Associated with SR-IOV VF: No 00:07:46.478 Max Data Transfer Size: 524288 00:07:46.478 Max Number of Namespaces: 256 00:07:46.478 Max Number of I/O Queues: 64 00:07:46.478 NVMe Specification Version (VS): 1.4 00:07:46.478 NVMe Specification Version (Identify): 1.4 00:07:46.478 Maximum Queue Entries: 2048 00:07:46.478 Contiguous Queues Required: Yes 00:07:46.478 Arbitration Mechanisms Supported 00:07:46.478 Weighted Round Robin: Not Supported 00:07:46.478 Vendor Specific: Not Supported 00:07:46.478 Reset Timeout: 7500 ms 00:07:46.478 Doorbell Stride: 4 bytes 00:07:46.478 NVM Subsystem Reset: Not Supported 00:07:46.478 Command Sets Supported 00:07:46.478 NVM Command Set: Supported 00:07:46.478 Boot Partition: Not Supported 00:07:46.478 Memory Page Size Minimum: 4096 bytes 00:07:46.478 Memory Page Size Maximum: 65536 bytes 00:07:46.478 Persistent Memory Region: Not Supported 00:07:46.478 Optional Asynchronous Events Supported 00:07:46.478 Namespace Attribute Notices: Supported 00:07:46.478 Firmware Activation Notices: Not Supported 00:07:46.478 ANA Change Notices: Not Supported 00:07:46.478 PLE Aggregate Log Change Notices: Not Supported 00:07:46.478 LBA Status Info Alert Notices: Not Supported 00:07:46.478 EGE Aggregate Log Change Notices: Not Supported 00:07:46.478 Normal NVM Subsystem Shutdown event: Not Supported 00:07:46.478 Zone Descriptor Change Notices: Not Supported 00:07:46.478 Discovery Log Change Notices: Not Supported 00:07:46.478 Controller Attributes 00:07:46.478 128-bit Host Identifier: Not Supported 00:07:46.478 Non-Operational Permissive Mode: Not Supported 00:07:46.478 NVM Sets: Not Supported 00:07:46.478 Read Recovery Levels: Not Supported 00:07:46.478 Endurance Groups: Not Supported 00:07:46.478 Predictable Latency Mode: Not Supported 00:07:46.478 Traffic Based Keep ALive: Not Supported 00:07:46.478 Namespace Granularity: Not Supported 00:07:46.478 SQ Associations: Not Supported 00:07:46.478 UUID List: Not Supported 00:07:46.478 Multi-Domain Subsystem: Not Supported 00:07:46.478 Fixed Capacity Management: Not Supported 00:07:46.478 Variable Capacity Management: Not Supported 00:07:46.478 Delete Endurance Group: Not Supported 00:07:46.478 Delete NVM Set: Not Supported 00:07:46.478 Extended LBA Formats Supported: Supported 00:07:46.478 Flexible Data Placement Supported: Not Supported 00:07:46.478 00:07:46.478 Controller Memory Buffer Support 00:07:46.478 ================================ 00:07:46.478 Supported: No 00:07:46.478 00:07:46.478 Persistent Memory Region Support 00:07:46.478 ================================ 00:07:46.478 Supported: No 00:07:46.478 00:07:46.478 Admin Command Set Attributes 00:07:46.478 ============================ 00:07:46.478 Security Send/Receive: Not Supported 00:07:46.478 
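The dump above (and the ones that follow) come from the nvme.sh helper looping over the detected PCIe bdfs and calling spdk_nvme_identify on each. A minimal sketch of reproducing one of these dumps by hand is below; the bdf list is illustrative (taken from the controllers seen in this run) and the binary path assumes the same build tree used here:
# Sketch: run spdk_nvme_identify against each QEMU NVMe controller in this VM.
# The bdf list is illustrative; the real script derives it from the bound PCIe devices.
for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r "trtype:PCIe traddr:${bdf}" -i 0
done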
Format NVM: Supported 00:07:46.478 Firmware Activate/Download: Not Supported 00:07:46.478 Namespace Management: Supported 00:07:46.478 Device Self-Test: Not Supported 00:07:46.478 Directives: Supported 00:07:46.478 NVMe-MI: Not Supported 00:07:46.478 Virtualization Management: Not Supported 00:07:46.478 Doorbell Buffer Config: Supported 00:07:46.478 Get LBA Status Capability: Not Supported 00:07:46.478 Command & Feature Lockdown Capability: Not Supported 00:07:46.478 Abort Command Limit: 4 00:07:46.478 Async Event Request Limit: 4 00:07:46.478 Number of Firmware Slots: N/A 00:07:46.478 Firmware Slot 1 Read-Only: N/A 00:07:46.478 Firmware Activation Without Reset: N/A 00:07:46.478 Multiple Update Detection Support: N/A 00:07:46.478 Firmware Update Granularity: No Information Provided 00:07:46.478 Per-Namespace SMART Log: Yes 00:07:46.478 Asymmetric Namespace Access Log Page: Not Supported 00:07:46.478 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:46.478 Command Effects Log Page: Supported 00:07:46.478 Get Log Page Extended Data: Supported 00:07:46.478 Telemetry Log Pages: Not Supported 00:07:46.478 Persistent Event Log Pages: Not Supported 00:07:46.478 Supported Log Pages Log Page: May Support 00:07:46.478 Commands Supported & Effects Log Page: Not Supported 00:07:46.478 Feature Identifiers & Effects Log Page:May Support 00:07:46.478 NVMe-MI Commands & Effects Log Page: May Support 00:07:46.478 Data Area 4 for Telemetry Log: Not Supported 00:07:46.478 Error Log Page Entries Supported: 1 00:07:46.478 Keep Alive: Not Supported 00:07:46.478 00:07:46.478 NVM Command Set Attributes 00:07:46.478 ========================== 00:07:46.478 Submission Queue Entry Size 00:07:46.478 Max: 64 00:07:46.478 Min: 64 00:07:46.478 Completion Queue Entry Size 00:07:46.478 Max: 16 00:07:46.478 Min: 16 00:07:46.478 Number of Namespaces: 256 00:07:46.478 Compare Command: Supported 00:07:46.478 Write Uncorrectable Command: Not Supported 00:07:46.478 Dataset Management Command: Supported 00:07:46.478 Write Zeroes Command: Supported 00:07:46.478 Set Features Save Field: Supported 00:07:46.478 Reservations: Not Supported 00:07:46.478 Timestamp: Supported 00:07:46.478 Copy: Supported 00:07:46.478 Volatile Write Cache: Present 00:07:46.478 Atomic Write Unit (Normal): 1 00:07:46.478 Atomic Write Unit (PFail): 1 00:07:46.478 Atomic Compare & Write Unit: 1 00:07:46.478 Fused Compare & Write: Not Supported 00:07:46.478 Scatter-Gather List 00:07:46.478 SGL Command Set: Supported 00:07:46.478 SGL Keyed: Not Supported 00:07:46.478 SGL Bit Bucket Descriptor: Not Supported 00:07:46.478 SGL Metadata Pointer: Not Supported 00:07:46.478 Oversized SGL: Not Supported 00:07:46.478 SGL Metadata Address: Not Supported 00:07:46.478 SGL Offset: Not Supported 00:07:46.478 Transport SGL Data Block: Not Supported 00:07:46.478 Replay Protected Memory Block: Not Supported 00:07:46.478 00:07:46.478 Firmware Slot Information 00:07:46.478 ========================= 00:07:46.478 Active slot: 1 00:07:46.478 Slot 1 Firmware Revision: 1.0 00:07:46.478 00:07:46.478 00:07:46.478 Commands Supported and Effects 00:07:46.478 ============================== 00:07:46.478 Admin Commands 00:07:46.478 -------------- 00:07:46.478 Delete I/O Submission Queue (00h): Supported 00:07:46.478 Create I/O Submission Queue (01h): Supported 00:07:46.478 Get Log Page (02h): Supported 00:07:46.478 Delete I/O Completion Queue (04h): Supported 00:07:46.478 Create I/O Completion Queue (05h): Supported 00:07:46.478 Identify (06h): Supported 00:07:46.478 Abort (08h): Supported 
00:07:46.478 Set Features (09h): Supported 00:07:46.478 Get Features (0Ah): Supported 00:07:46.478 Asynchronous Event Request (0Ch): Supported 00:07:46.478 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:46.478 Directive Send (19h): Supported 00:07:46.478 Directive Receive (1Ah): Supported 00:07:46.478 Virtualization Management (1Ch): Supported 00:07:46.478 Doorbell Buffer Config (7Ch): Supported 00:07:46.478 Format NVM (80h): Supported LBA-Change 00:07:46.478 I/O Commands 00:07:46.478 ------------ 00:07:46.478 Flush (00h): Supported LBA-Change 00:07:46.478 Write (01h): Supported LBA-Change 00:07:46.478 Read (02h): Supported 00:07:46.478 Compare (05h): Supported 00:07:46.478 Write Zeroes (08h): Supported LBA-Change 00:07:46.478 Dataset Management (09h): Supported LBA-Change 00:07:46.478 Unknown (0Ch): Supported 00:07:46.478 Unknown (12h): Supported 00:07:46.478 Copy (19h): Supported LBA-Change 00:07:46.478 Unknown (1Dh): Supported LBA-Change 00:07:46.478 00:07:46.478 Error Log 00:07:46.478 ========= 00:07:46.478 00:07:46.478 Arbitration 00:07:46.478 =========== 00:07:46.478 Arbitration Burst: no limit 00:07:46.478 00:07:46.478 Power Management 00:07:46.478 ================ 00:07:46.478 Number of Power States: 1 00:07:46.478 Current Power State: Power State #0 00:07:46.478 Power State #0: 00:07:46.478 Max Power: 25.00 W 00:07:46.478 Non-Operational State: Operational 00:07:46.478 Entry Latency: 16 microseconds 00:07:46.478 Exit Latency: 4 microseconds 00:07:46.478 Relative Read Throughput: 0 00:07:46.478 Relative Read Latency: 0 00:07:46.478 Relative Write Throughput: 0 00:07:46.478 Relative Write Latency: 0 00:07:46.478 Idle Power: Not Reported 00:07:46.478 Active Power: Not Reported 00:07:46.478 Non-Operational Permissive Mode: Not Supported 00:07:46.478 00:07:46.478 Health Information 00:07:46.478 ================== 00:07:46.478 Critical Warnings: 00:07:46.479 Available Spare Space: OK 00:07:46.479 Temperature: OK 00:07:46.479 Device Reliability: OK 00:07:46.479 Read Only: No 00:07:46.479 Volatile Memory Backup: OK 00:07:46.479 Current Temperature: 323 Kelvin (50 Celsius) 00:07:46.479 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:46.479 Available Spare: 0% 00:07:46.479 Available Spare Threshold: 0% 00:07:46.479 Life Percentage Used: 0% 00:07:46.479 Data Units Read: 641 00:07:46.479 Data Units Written: 569 00:07:46.479 Host Read Commands: 34905 00:07:46.479 Host Write Commands: 34691 00:07:46.479 Controller Busy Time: 0 minutes 00:07:46.479 Power Cycles: 0 00:07:46.479 Power On Hours: 0 hours 00:07:46.479 Unsafe Shutdowns: 0 00:07:46.479 Unrecoverable Media Errors: 0 00:07:46.479 Lifetime Error Log Entries: 0 00:07:46.479 Warning Temperature Time: 0 minutes 00:07:46.479 Critical Temperature Time: 0 minutes 00:07:46.479 00:07:46.479 Number of Queues 00:07:46.479 ================ 00:07:46.479 Number of I/O Submission Queues: 64 00:07:46.479 Number of I/O Completion Queues: 64 00:07:46.479 00:07:46.479 ZNS Specific Controller Data 00:07:46.479 ============================ 00:07:46.479 Zone Append Size Limit: 0 00:07:46.479 00:07:46.479 00:07:46.479 Active Namespaces 00:07:46.479 ================= 00:07:46.479 Namespace ID:1 00:07:46.479 Error Recovery Timeout: Unlimited 00:07:46.479 Command Set Identifier: NVM (00h) 00:07:46.479 Deallocate: Supported 00:07:46.479 Deallocated/Unwritten Error: Supported 00:07:46.479 Deallocated Read Value: All 0x00 00:07:46.479 Deallocate in Write Zeroes: Not Supported 00:07:46.479 Deallocated Guard Field: 0xFFFF 00:07:46.479 Flush: 
Supported 00:07:46.479 Reservation: Not Supported 00:07:46.479 Metadata Transferred as: Separate Metadata Buffer 00:07:46.479 Namespace Sharing Capabilities: Private 00:07:46.479 Size (in LBAs): 1548666 (5GiB) 00:07:46.479 Capacity (in LBAs): 1548666 (5GiB) 00:07:46.479 Utilization (in LBAs): 1548666 (5GiB) 00:07:46.479 Thin Provisioning: Not Supported 00:07:46.479 Per-NS Atomic Units: No 00:07:46.479 Maximum Single Source Range Length: 128 00:07:46.479 Maximum Copy Length: 128 00:07:46.479 Maximum Source Range Count: 128 00:07:46.479 NGUID/EUI64 Never Reused: No 00:07:46.479 Namespace Write Protected: No 00:07:46.479 Number of LBA Formats: 8 00:07:46.479 Current LBA Format: LBA Format #07 00:07:46.479 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:46.479 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:46.479 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:46.479 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:46.479 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:46.479 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:46.479 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:46.479 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:46.479 00:07:46.479 NVM Specific Namespace Data 00:07:46.479 =========================== 00:07:46.479 Logical Block Storage Tag Mask: 0 00:07:46.479 Protection Information Capabilities: 00:07:46.479 16b Guard Protection Information Storage Tag Support: No 00:07:46.479 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:46.479 Storage Tag Check Read Support: No 00:07:46.479 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.479 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.479 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.479 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.479 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.479 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.479 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.479 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.479 02:52:17 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:46.479 02:52:17 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:46.738 ===================================================== 00:07:46.738 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:46.738 ===================================================== 00:07:46.738 Controller Capabilities/Features 00:07:46.738 ================================ 00:07:46.738 Vendor ID: 1b36 00:07:46.738 Subsystem Vendor ID: 1af4 00:07:46.738 Serial Number: 12341 00:07:46.738 Model Number: QEMU NVMe Ctrl 00:07:46.738 Firmware Version: 8.0.0 00:07:46.738 Recommended Arb Burst: 6 00:07:46.738 IEEE OUI Identifier: 00 54 52 00:07:46.738 Multi-path I/O 00:07:46.738 May have multiple subsystem ports: No 00:07:46.738 May have multiple controllers: No 00:07:46.738 Associated with SR-IOV VF: No 00:07:46.738 Max Data Transfer Size: 524288 00:07:46.738 Max Number of Namespaces: 256 00:07:46.738 Max Number of I/O Queues: 64 00:07:46.738 NVMe 
Specification Version (VS): 1.4 00:07:46.738 NVMe Specification Version (Identify): 1.4 00:07:46.738 Maximum Queue Entries: 2048 00:07:46.738 Contiguous Queues Required: Yes 00:07:46.738 Arbitration Mechanisms Supported 00:07:46.738 Weighted Round Robin: Not Supported 00:07:46.738 Vendor Specific: Not Supported 00:07:46.738 Reset Timeout: 7500 ms 00:07:46.738 Doorbell Stride: 4 bytes 00:07:46.738 NVM Subsystem Reset: Not Supported 00:07:46.738 Command Sets Supported 00:07:46.738 NVM Command Set: Supported 00:07:46.738 Boot Partition: Not Supported 00:07:46.738 Memory Page Size Minimum: 4096 bytes 00:07:46.738 Memory Page Size Maximum: 65536 bytes 00:07:46.738 Persistent Memory Region: Not Supported 00:07:46.738 Optional Asynchronous Events Supported 00:07:46.738 Namespace Attribute Notices: Supported 00:07:46.738 Firmware Activation Notices: Not Supported 00:07:46.738 ANA Change Notices: Not Supported 00:07:46.738 PLE Aggregate Log Change Notices: Not Supported 00:07:46.738 LBA Status Info Alert Notices: Not Supported 00:07:46.738 EGE Aggregate Log Change Notices: Not Supported 00:07:46.738 Normal NVM Subsystem Shutdown event: Not Supported 00:07:46.738 Zone Descriptor Change Notices: Not Supported 00:07:46.738 Discovery Log Change Notices: Not Supported 00:07:46.738 Controller Attributes 00:07:46.738 128-bit Host Identifier: Not Supported 00:07:46.738 Non-Operational Permissive Mode: Not Supported 00:07:46.738 NVM Sets: Not Supported 00:07:46.738 Read Recovery Levels: Not Supported 00:07:46.738 Endurance Groups: Not Supported 00:07:46.738 Predictable Latency Mode: Not Supported 00:07:46.738 Traffic Based Keep ALive: Not Supported 00:07:46.738 Namespace Granularity: Not Supported 00:07:46.738 SQ Associations: Not Supported 00:07:46.738 UUID List: Not Supported 00:07:46.738 Multi-Domain Subsystem: Not Supported 00:07:46.738 Fixed Capacity Management: Not Supported 00:07:46.738 Variable Capacity Management: Not Supported 00:07:46.738 Delete Endurance Group: Not Supported 00:07:46.738 Delete NVM Set: Not Supported 00:07:46.738 Extended LBA Formats Supported: Supported 00:07:46.738 Flexible Data Placement Supported: Not Supported 00:07:46.738 00:07:46.738 Controller Memory Buffer Support 00:07:46.738 ================================ 00:07:46.738 Supported: No 00:07:46.738 00:07:46.738 Persistent Memory Region Support 00:07:46.738 ================================ 00:07:46.738 Supported: No 00:07:46.738 00:07:46.738 Admin Command Set Attributes 00:07:46.738 ============================ 00:07:46.738 Security Send/Receive: Not Supported 00:07:46.738 Format NVM: Supported 00:07:46.738 Firmware Activate/Download: Not Supported 00:07:46.738 Namespace Management: Supported 00:07:46.738 Device Self-Test: Not Supported 00:07:46.738 Directives: Supported 00:07:46.738 NVMe-MI: Not Supported 00:07:46.738 Virtualization Management: Not Supported 00:07:46.738 Doorbell Buffer Config: Supported 00:07:46.738 Get LBA Status Capability: Not Supported 00:07:46.738 Command & Feature Lockdown Capability: Not Supported 00:07:46.738 Abort Command Limit: 4 00:07:46.738 Async Event Request Limit: 4 00:07:46.738 Number of Firmware Slots: N/A 00:07:46.738 Firmware Slot 1 Read-Only: N/A 00:07:46.738 Firmware Activation Without Reset: N/A 00:07:46.738 Multiple Update Detection Support: N/A 00:07:46.738 Firmware Update Granularity: No Information Provided 00:07:46.738 Per-Namespace SMART Log: Yes 00:07:46.738 Asymmetric Namespace Access Log Page: Not Supported 00:07:46.738 Subsystem NQN: nqn.2019-08.org.qemu:12341 
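With four controllers dumped back to back, pulling out only the identifying fields makes it easier to tell them apart (serials 12340, 12341, 12342, 12343). A small sketch using plain grep; the field names are taken from the output above, the invocation is otherwise illustrative:
# Sketch: extract the serial number and subsystem NQN from one identify dump.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 \
  | grep -E 'Serial Number|Subsystem NQN'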
00:07:46.738 Command Effects Log Page: Supported 00:07:46.738 Get Log Page Extended Data: Supported 00:07:46.738 Telemetry Log Pages: Not Supported 00:07:46.738 Persistent Event Log Pages: Not Supported 00:07:46.738 Supported Log Pages Log Page: May Support 00:07:46.738 Commands Supported & Effects Log Page: Not Supported 00:07:46.738 Feature Identifiers & Effects Log Page:May Support 00:07:46.738 NVMe-MI Commands & Effects Log Page: May Support 00:07:46.738 Data Area 4 for Telemetry Log: Not Supported 00:07:46.738 Error Log Page Entries Supported: 1 00:07:46.738 Keep Alive: Not Supported 00:07:46.738 00:07:46.738 NVM Command Set Attributes 00:07:46.738 ========================== 00:07:46.738 Submission Queue Entry Size 00:07:46.738 Max: 64 00:07:46.738 Min: 64 00:07:46.738 Completion Queue Entry Size 00:07:46.738 Max: 16 00:07:46.738 Min: 16 00:07:46.738 Number of Namespaces: 256 00:07:46.738 Compare Command: Supported 00:07:46.738 Write Uncorrectable Command: Not Supported 00:07:46.738 Dataset Management Command: Supported 00:07:46.738 Write Zeroes Command: Supported 00:07:46.738 Set Features Save Field: Supported 00:07:46.738 Reservations: Not Supported 00:07:46.738 Timestamp: Supported 00:07:46.738 Copy: Supported 00:07:46.738 Volatile Write Cache: Present 00:07:46.739 Atomic Write Unit (Normal): 1 00:07:46.739 Atomic Write Unit (PFail): 1 00:07:46.739 Atomic Compare & Write Unit: 1 00:07:46.739 Fused Compare & Write: Not Supported 00:07:46.739 Scatter-Gather List 00:07:46.739 SGL Command Set: Supported 00:07:46.739 SGL Keyed: Not Supported 00:07:46.739 SGL Bit Bucket Descriptor: Not Supported 00:07:46.739 SGL Metadata Pointer: Not Supported 00:07:46.739 Oversized SGL: Not Supported 00:07:46.739 SGL Metadata Address: Not Supported 00:07:46.739 SGL Offset: Not Supported 00:07:46.739 Transport SGL Data Block: Not Supported 00:07:46.739 Replay Protected Memory Block: Not Supported 00:07:46.739 00:07:46.739 Firmware Slot Information 00:07:46.739 ========================= 00:07:46.739 Active slot: 1 00:07:46.739 Slot 1 Firmware Revision: 1.0 00:07:46.739 00:07:46.739 00:07:46.739 Commands Supported and Effects 00:07:46.739 ============================== 00:07:46.739 Admin Commands 00:07:46.739 -------------- 00:07:46.739 Delete I/O Submission Queue (00h): Supported 00:07:46.739 Create I/O Submission Queue (01h): Supported 00:07:46.739 Get Log Page (02h): Supported 00:07:46.739 Delete I/O Completion Queue (04h): Supported 00:07:46.739 Create I/O Completion Queue (05h): Supported 00:07:46.739 Identify (06h): Supported 00:07:46.739 Abort (08h): Supported 00:07:46.739 Set Features (09h): Supported 00:07:46.739 Get Features (0Ah): Supported 00:07:46.739 Asynchronous Event Request (0Ch): Supported 00:07:46.739 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:46.739 Directive Send (19h): Supported 00:07:46.739 Directive Receive (1Ah): Supported 00:07:46.739 Virtualization Management (1Ch): Supported 00:07:46.739 Doorbell Buffer Config (7Ch): Supported 00:07:46.739 Format NVM (80h): Supported LBA-Change 00:07:46.739 I/O Commands 00:07:46.739 ------------ 00:07:46.739 Flush (00h): Supported LBA-Change 00:07:46.739 Write (01h): Supported LBA-Change 00:07:46.739 Read (02h): Supported 00:07:46.739 Compare (05h): Supported 00:07:46.739 Write Zeroes (08h): Supported LBA-Change 00:07:46.739 Dataset Management (09h): Supported LBA-Change 00:07:46.739 Unknown (0Ch): Supported 00:07:46.739 Unknown (12h): Supported 00:07:46.739 Copy (19h): Supported LBA-Change 00:07:46.739 Unknown (1Dh): 
Supported LBA-Change 00:07:46.739 00:07:46.739 Error Log 00:07:46.739 ========= 00:07:46.739 00:07:46.739 Arbitration 00:07:46.739 =========== 00:07:46.739 Arbitration Burst: no limit 00:07:46.739 00:07:46.739 Power Management 00:07:46.739 ================ 00:07:46.739 Number of Power States: 1 00:07:46.739 Current Power State: Power State #0 00:07:46.739 Power State #0: 00:07:46.739 Max Power: 25.00 W 00:07:46.739 Non-Operational State: Operational 00:07:46.739 Entry Latency: 16 microseconds 00:07:46.739 Exit Latency: 4 microseconds 00:07:46.739 Relative Read Throughput: 0 00:07:46.739 Relative Read Latency: 0 00:07:46.739 Relative Write Throughput: 0 00:07:46.739 Relative Write Latency: 0 00:07:46.739 Idle Power: Not Reported 00:07:46.739 Active Power: Not Reported 00:07:46.739 Non-Operational Permissive Mode: Not Supported 00:07:46.739 00:07:46.739 Health Information 00:07:46.739 ================== 00:07:46.739 Critical Warnings: 00:07:46.739 Available Spare Space: OK 00:07:46.739 Temperature: OK 00:07:46.739 Device Reliability: OK 00:07:46.739 Read Only: No 00:07:46.739 Volatile Memory Backup: OK 00:07:46.739 Current Temperature: 323 Kelvin (50 Celsius) 00:07:46.739 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:46.739 Available Spare: 0% 00:07:46.739 Available Spare Threshold: 0% 00:07:46.739 Life Percentage Used: 0% 00:07:46.739 Data Units Read: 1040 00:07:46.739 Data Units Written: 906 00:07:46.739 Host Read Commands: 53479 00:07:46.739 Host Write Commands: 52270 00:07:46.739 Controller Busy Time: 0 minutes 00:07:46.739 Power Cycles: 0 00:07:46.739 Power On Hours: 0 hours 00:07:46.739 Unsafe Shutdowns: 0 00:07:46.739 Unrecoverable Media Errors: 0 00:07:46.739 Lifetime Error Log Entries: 0 00:07:46.739 Warning Temperature Time: 0 minutes 00:07:46.739 Critical Temperature Time: 0 minutes 00:07:46.739 00:07:46.739 Number of Queues 00:07:46.739 ================ 00:07:46.739 Number of I/O Submission Queues: 64 00:07:46.739 Number of I/O Completion Queues: 64 00:07:46.739 00:07:46.739 ZNS Specific Controller Data 00:07:46.739 ============================ 00:07:46.739 Zone Append Size Limit: 0 00:07:46.739 00:07:46.739 00:07:46.739 Active Namespaces 00:07:46.739 ================= 00:07:46.739 Namespace ID:1 00:07:46.739 Error Recovery Timeout: Unlimited 00:07:46.739 Command Set Identifier: NVM (00h) 00:07:46.739 Deallocate: Supported 00:07:46.739 Deallocated/Unwritten Error: Supported 00:07:46.739 Deallocated Read Value: All 0x00 00:07:46.739 Deallocate in Write Zeroes: Not Supported 00:07:46.739 Deallocated Guard Field: 0xFFFF 00:07:46.739 Flush: Supported 00:07:46.739 Reservation: Not Supported 00:07:46.739 Namespace Sharing Capabilities: Private 00:07:46.739 Size (in LBAs): 1310720 (5GiB) 00:07:46.739 Capacity (in LBAs): 1310720 (5GiB) 00:07:46.739 Utilization (in LBAs): 1310720 (5GiB) 00:07:46.739 Thin Provisioning: Not Supported 00:07:46.739 Per-NS Atomic Units: No 00:07:46.739 Maximum Single Source Range Length: 128 00:07:46.739 Maximum Copy Length: 128 00:07:46.739 Maximum Source Range Count: 128 00:07:46.739 NGUID/EUI64 Never Reused: No 00:07:46.739 Namespace Write Protected: No 00:07:46.739 Number of LBA Formats: 8 00:07:46.739 Current LBA Format: LBA Format #04 00:07:46.739 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:46.739 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:46.739 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:46.739 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:46.739 LBA Format #04: Data Size: 4096 Metadata Size: 0 
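The GiB figures in parentheses follow directly from the LBA count and the data size of the current LBA format. A quick check against the 0000:00:11.0 namespace reported above, assuming the 4096-byte data size of its current LBA Format #04:
# Sketch: namespace capacity in bytes = LBA count * data size of the current LBA format.
lbas=1310720           # Size (in LBAs) reported above
block=4096             # Data Size of the current LBA Format #04
echo $(( lbas * block ))               # 5368709120 bytes
echo $(( lbas * block / 1024**3 ))GiB  # 5GiB, matching the report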
00:07:46.739 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:46.739 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:46.739 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:46.739 00:07:46.739 NVM Specific Namespace Data 00:07:46.739 =========================== 00:07:46.739 Logical Block Storage Tag Mask: 0 00:07:46.739 Protection Information Capabilities: 00:07:46.739 16b Guard Protection Information Storage Tag Support: No 00:07:46.739 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:46.739 Storage Tag Check Read Support: No 00:07:46.739 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.739 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.739 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.739 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.739 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.739 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.739 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.739 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.739 02:52:17 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:46.739 02:52:17 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:46.998 ===================================================== 00:07:46.998 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:46.998 ===================================================== 00:07:46.998 Controller Capabilities/Features 00:07:46.998 ================================ 00:07:46.998 Vendor ID: 1b36 00:07:46.998 Subsystem Vendor ID: 1af4 00:07:46.998 Serial Number: 12342 00:07:46.998 Model Number: QEMU NVMe Ctrl 00:07:46.998 Firmware Version: 8.0.0 00:07:46.998 Recommended Arb Burst: 6 00:07:46.998 IEEE OUI Identifier: 00 54 52 00:07:46.998 Multi-path I/O 00:07:46.998 May have multiple subsystem ports: No 00:07:46.998 May have multiple controllers: No 00:07:46.998 Associated with SR-IOV VF: No 00:07:46.998 Max Data Transfer Size: 524288 00:07:46.998 Max Number of Namespaces: 256 00:07:46.998 Max Number of I/O Queues: 64 00:07:46.998 NVMe Specification Version (VS): 1.4 00:07:46.998 NVMe Specification Version (Identify): 1.4 00:07:46.998 Maximum Queue Entries: 2048 00:07:46.998 Contiguous Queues Required: Yes 00:07:46.998 Arbitration Mechanisms Supported 00:07:46.998 Weighted Round Robin: Not Supported 00:07:46.998 Vendor Specific: Not Supported 00:07:46.998 Reset Timeout: 7500 ms 00:07:46.998 Doorbell Stride: 4 bytes 00:07:46.998 NVM Subsystem Reset: Not Supported 00:07:46.998 Command Sets Supported 00:07:46.998 NVM Command Set: Supported 00:07:46.998 Boot Partition: Not Supported 00:07:46.998 Memory Page Size Minimum: 4096 bytes 00:07:46.998 Memory Page Size Maximum: 65536 bytes 00:07:46.998 Persistent Memory Region: Not Supported 00:07:46.998 Optional Asynchronous Events Supported 00:07:46.998 Namespace Attribute Notices: Supported 00:07:46.998 Firmware Activation Notices: Not Supported 00:07:46.998 ANA Change Notices: Not Supported 00:07:46.998 PLE Aggregate Log Change Notices: Not Supported 00:07:46.998 LBA Status Info Alert Notices: 
Not Supported 00:07:46.998 EGE Aggregate Log Change Notices: Not Supported 00:07:46.998 Normal NVM Subsystem Shutdown event: Not Supported 00:07:46.998 Zone Descriptor Change Notices: Not Supported 00:07:46.998 Discovery Log Change Notices: Not Supported 00:07:46.998 Controller Attributes 00:07:46.998 128-bit Host Identifier: Not Supported 00:07:46.998 Non-Operational Permissive Mode: Not Supported 00:07:46.998 NVM Sets: Not Supported 00:07:46.998 Read Recovery Levels: Not Supported 00:07:46.998 Endurance Groups: Not Supported 00:07:46.998 Predictable Latency Mode: Not Supported 00:07:46.998 Traffic Based Keep ALive: Not Supported 00:07:46.998 Namespace Granularity: Not Supported 00:07:46.998 SQ Associations: Not Supported 00:07:46.998 UUID List: Not Supported 00:07:46.998 Multi-Domain Subsystem: Not Supported 00:07:46.998 Fixed Capacity Management: Not Supported 00:07:46.998 Variable Capacity Management: Not Supported 00:07:46.998 Delete Endurance Group: Not Supported 00:07:46.998 Delete NVM Set: Not Supported 00:07:46.998 Extended LBA Formats Supported: Supported 00:07:46.998 Flexible Data Placement Supported: Not Supported 00:07:46.998 00:07:46.998 Controller Memory Buffer Support 00:07:46.998 ================================ 00:07:46.998 Supported: No 00:07:46.998 00:07:46.998 Persistent Memory Region Support 00:07:46.998 ================================ 00:07:46.998 Supported: No 00:07:46.998 00:07:46.998 Admin Command Set Attributes 00:07:46.998 ============================ 00:07:46.998 Security Send/Receive: Not Supported 00:07:46.998 Format NVM: Supported 00:07:46.998 Firmware Activate/Download: Not Supported 00:07:46.998 Namespace Management: Supported 00:07:46.998 Device Self-Test: Not Supported 00:07:46.998 Directives: Supported 00:07:46.998 NVMe-MI: Not Supported 00:07:46.998 Virtualization Management: Not Supported 00:07:46.998 Doorbell Buffer Config: Supported 00:07:46.998 Get LBA Status Capability: Not Supported 00:07:46.998 Command & Feature Lockdown Capability: Not Supported 00:07:46.998 Abort Command Limit: 4 00:07:46.998 Async Event Request Limit: 4 00:07:46.998 Number of Firmware Slots: N/A 00:07:46.998 Firmware Slot 1 Read-Only: N/A 00:07:46.998 Firmware Activation Without Reset: N/A 00:07:46.998 Multiple Update Detection Support: N/A 00:07:46.998 Firmware Update Granularity: No Information Provided 00:07:46.998 Per-Namespace SMART Log: Yes 00:07:46.998 Asymmetric Namespace Access Log Page: Not Supported 00:07:46.998 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:46.998 Command Effects Log Page: Supported 00:07:46.998 Get Log Page Extended Data: Supported 00:07:46.998 Telemetry Log Pages: Not Supported 00:07:46.998 Persistent Event Log Pages: Not Supported 00:07:46.998 Supported Log Pages Log Page: May Support 00:07:46.998 Commands Supported & Effects Log Page: Not Supported 00:07:46.998 Feature Identifiers & Effects Log Page:May Support 00:07:46.998 NVMe-MI Commands & Effects Log Page: May Support 00:07:46.998 Data Area 4 for Telemetry Log: Not Supported 00:07:46.998 Error Log Page Entries Supported: 1 00:07:46.998 Keep Alive: Not Supported 00:07:46.998 00:07:46.998 NVM Command Set Attributes 00:07:46.998 ========================== 00:07:46.998 Submission Queue Entry Size 00:07:46.998 Max: 64 00:07:46.998 Min: 64 00:07:46.998 Completion Queue Entry Size 00:07:46.998 Max: 16 00:07:46.998 Min: 16 00:07:46.998 Number of Namespaces: 256 00:07:46.998 Compare Command: Supported 00:07:46.998 Write Uncorrectable Command: Not Supported 00:07:46.998 Dataset Management Command: 
Supported 00:07:46.998 Write Zeroes Command: Supported 00:07:46.998 Set Features Save Field: Supported 00:07:46.998 Reservations: Not Supported 00:07:46.998 Timestamp: Supported 00:07:46.998 Copy: Supported 00:07:46.998 Volatile Write Cache: Present 00:07:46.998 Atomic Write Unit (Normal): 1 00:07:46.998 Atomic Write Unit (PFail): 1 00:07:46.998 Atomic Compare & Write Unit: 1 00:07:46.998 Fused Compare & Write: Not Supported 00:07:46.998 Scatter-Gather List 00:07:46.998 SGL Command Set: Supported 00:07:46.998 SGL Keyed: Not Supported 00:07:46.999 SGL Bit Bucket Descriptor: Not Supported 00:07:46.999 SGL Metadata Pointer: Not Supported 00:07:46.999 Oversized SGL: Not Supported 00:07:46.999 SGL Metadata Address: Not Supported 00:07:46.999 SGL Offset: Not Supported 00:07:46.999 Transport SGL Data Block: Not Supported 00:07:46.999 Replay Protected Memory Block: Not Supported 00:07:46.999 00:07:46.999 Firmware Slot Information 00:07:46.999 ========================= 00:07:46.999 Active slot: 1 00:07:46.999 Slot 1 Firmware Revision: 1.0 00:07:46.999 00:07:46.999 00:07:46.999 Commands Supported and Effects 00:07:46.999 ============================== 00:07:46.999 Admin Commands 00:07:46.999 -------------- 00:07:46.999 Delete I/O Submission Queue (00h): Supported 00:07:46.999 Create I/O Submission Queue (01h): Supported 00:07:46.999 Get Log Page (02h): Supported 00:07:46.999 Delete I/O Completion Queue (04h): Supported 00:07:46.999 Create I/O Completion Queue (05h): Supported 00:07:46.999 Identify (06h): Supported 00:07:46.999 Abort (08h): Supported 00:07:46.999 Set Features (09h): Supported 00:07:46.999 Get Features (0Ah): Supported 00:07:46.999 Asynchronous Event Request (0Ch): Supported 00:07:46.999 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:46.999 Directive Send (19h): Supported 00:07:46.999 Directive Receive (1Ah): Supported 00:07:46.999 Virtualization Management (1Ch): Supported 00:07:46.999 Doorbell Buffer Config (7Ch): Supported 00:07:46.999 Format NVM (80h): Supported LBA-Change 00:07:46.999 I/O Commands 00:07:46.999 ------------ 00:07:46.999 Flush (00h): Supported LBA-Change 00:07:46.999 Write (01h): Supported LBA-Change 00:07:46.999 Read (02h): Supported 00:07:46.999 Compare (05h): Supported 00:07:46.999 Write Zeroes (08h): Supported LBA-Change 00:07:46.999 Dataset Management (09h): Supported LBA-Change 00:07:46.999 Unknown (0Ch): Supported 00:07:46.999 Unknown (12h): Supported 00:07:46.999 Copy (19h): Supported LBA-Change 00:07:46.999 Unknown (1Dh): Supported LBA-Change 00:07:46.999 00:07:46.999 Error Log 00:07:46.999 ========= 00:07:46.999 00:07:46.999 Arbitration 00:07:46.999 =========== 00:07:46.999 Arbitration Burst: no limit 00:07:46.999 00:07:46.999 Power Management 00:07:46.999 ================ 00:07:46.999 Number of Power States: 1 00:07:46.999 Current Power State: Power State #0 00:07:46.999 Power State #0: 00:07:46.999 Max Power: 25.00 W 00:07:46.999 Non-Operational State: Operational 00:07:46.999 Entry Latency: 16 microseconds 00:07:46.999 Exit Latency: 4 microseconds 00:07:46.999 Relative Read Throughput: 0 00:07:46.999 Relative Read Latency: 0 00:07:46.999 Relative Write Throughput: 0 00:07:46.999 Relative Write Latency: 0 00:07:46.999 Idle Power: Not Reported 00:07:46.999 Active Power: Not Reported 00:07:46.999 Non-Operational Permissive Mode: Not Supported 00:07:46.999 00:07:46.999 Health Information 00:07:46.999 ================== 00:07:46.999 Critical Warnings: 00:07:46.999 Available Spare Space: OK 00:07:46.999 Temperature: OK 00:07:46.999 Device 
Reliability: OK 00:07:46.999 Read Only: No 00:07:46.999 Volatile Memory Backup: OK 00:07:46.999 Current Temperature: 323 Kelvin (50 Celsius) 00:07:46.999 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:46.999 Available Spare: 0% 00:07:46.999 Available Spare Threshold: 0% 00:07:46.999 Life Percentage Used: 0% 00:07:46.999 Data Units Read: 2160 00:07:46.999 Data Units Written: 1947 00:07:46.999 Host Read Commands: 107596 00:07:46.999 Host Write Commands: 105866 00:07:46.999 Controller Busy Time: 0 minutes 00:07:46.999 Power Cycles: 0 00:07:46.999 Power On Hours: 0 hours 00:07:46.999 Unsafe Shutdowns: 0 00:07:46.999 Unrecoverable Media Errors: 0 00:07:46.999 Lifetime Error Log Entries: 0 00:07:46.999 Warning Temperature Time: 0 minutes 00:07:46.999 Critical Temperature Time: 0 minutes 00:07:46.999 00:07:46.999 Number of Queues 00:07:46.999 ================ 00:07:46.999 Number of I/O Submission Queues: 64 00:07:46.999 Number of I/O Completion Queues: 64 00:07:46.999 00:07:46.999 ZNS Specific Controller Data 00:07:46.999 ============================ 00:07:46.999 Zone Append Size Limit: 0 00:07:46.999 00:07:46.999 00:07:46.999 Active Namespaces 00:07:46.999 ================= 00:07:46.999 Namespace ID:1 00:07:46.999 Error Recovery Timeout: Unlimited 00:07:46.999 Command Set Identifier: NVM (00h) 00:07:46.999 Deallocate: Supported 00:07:46.999 Deallocated/Unwritten Error: Supported 00:07:46.999 Deallocated Read Value: All 0x00 00:07:46.999 Deallocate in Write Zeroes: Not Supported 00:07:46.999 Deallocated Guard Field: 0xFFFF 00:07:46.999 Flush: Supported 00:07:46.999 Reservation: Not Supported 00:07:46.999 Namespace Sharing Capabilities: Private 00:07:46.999 Size (in LBAs): 1048576 (4GiB) 00:07:46.999 Capacity (in LBAs): 1048576 (4GiB) 00:07:46.999 Utilization (in LBAs): 1048576 (4GiB) 00:07:46.999 Thin Provisioning: Not Supported 00:07:46.999 Per-NS Atomic Units: No 00:07:46.999 Maximum Single Source Range Length: 128 00:07:46.999 Maximum Copy Length: 128 00:07:46.999 Maximum Source Range Count: 128 00:07:46.999 NGUID/EUI64 Never Reused: No 00:07:46.999 Namespace Write Protected: No 00:07:46.999 Number of LBA Formats: 8 00:07:46.999 Current LBA Format: LBA Format #04 00:07:46.999 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:46.999 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:46.999 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:46.999 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:46.999 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:46.999 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:46.999 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:46.999 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:46.999 00:07:46.999 NVM Specific Namespace Data 00:07:46.999 =========================== 00:07:46.999 Logical Block Storage Tag Mask: 0 00:07:46.999 Protection Information Capabilities: 00:07:46.999 16b Guard Protection Information Storage Tag Support: No 00:07:46.999 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:46.999 Storage Tag Check Read Support: No 00:07:46.999 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.999 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.999 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.999 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.999 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.999 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.999 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.999 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.999 Namespace ID:2 00:07:46.999 Error Recovery Timeout: Unlimited 00:07:46.999 Command Set Identifier: NVM (00h) 00:07:46.999 Deallocate: Supported 00:07:46.999 Deallocated/Unwritten Error: Supported 00:07:46.999 Deallocated Read Value: All 0x00 00:07:46.999 Deallocate in Write Zeroes: Not Supported 00:07:46.999 Deallocated Guard Field: 0xFFFF 00:07:46.999 Flush: Supported 00:07:46.999 Reservation: Not Supported 00:07:46.999 Namespace Sharing Capabilities: Private 00:07:46.999 Size (in LBAs): 1048576 (4GiB) 00:07:46.999 Capacity (in LBAs): 1048576 (4GiB) 00:07:46.999 Utilization (in LBAs): 1048576 (4GiB) 00:07:46.999 Thin Provisioning: Not Supported 00:07:46.999 Per-NS Atomic Units: No 00:07:46.999 Maximum Single Source Range Length: 128 00:07:46.999 Maximum Copy Length: 128 00:07:46.999 Maximum Source Range Count: 128 00:07:46.999 NGUID/EUI64 Never Reused: No 00:07:46.999 Namespace Write Protected: No 00:07:46.999 Number of LBA Formats: 8 00:07:46.999 Current LBA Format: LBA Format #04 00:07:46.999 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:46.999 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:46.999 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:46.999 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:46.999 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:46.999 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:46.999 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:46.999 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:46.999 00:07:46.999 NVM Specific Namespace Data 00:07:46.999 =========================== 00:07:46.999 Logical Block Storage Tag Mask: 0 00:07:46.999 Protection Information Capabilities: 00:07:46.999 16b Guard Protection Information Storage Tag Support: No 00:07:46.999 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:46.999 Storage Tag Check Read Support: No 00:07:46.999 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.999 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.999 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.000 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.000 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.000 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.000 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.000 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.000 Namespace ID:3 00:07:47.000 Error Recovery Timeout: Unlimited 00:07:47.000 Command Set Identifier: NVM (00h) 00:07:47.000 Deallocate: Supported 00:07:47.000 Deallocated/Unwritten Error: Supported 00:07:47.000 Deallocated Read Value: All 0x00 00:07:47.000 Deallocate in Write Zeroes: Not Supported 00:07:47.000 Deallocated Guard Field: 0xFFFF 00:07:47.000 Flush: Supported 00:07:47.000 Reservation: Not Supported 00:07:47.000 
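The temperatures in the health sections above are the NVMe SMART composite temperature, which the spec reports in Kelvin; the Celsius value in parentheses appears to be a plain K - 273 conversion. A sketch using the values reported for 0000:00:12.0:
# Sketch: convert the reported composite temperature and threshold from Kelvin to Celsius.
current_k=323    # Current Temperature reported above
threshold_k=343  # Temperature Threshold reported above
echo "current:   $(( current_k - 273 )) C"    # 50 C
echo "threshold: $(( threshold_k - 273 )) C"  # 70 C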
Namespace Sharing Capabilities: Private 00:07:47.000 Size (in LBAs): 1048576 (4GiB) 00:07:47.000 Capacity (in LBAs): 1048576 (4GiB) 00:07:47.000 Utilization (in LBAs): 1048576 (4GiB) 00:07:47.000 Thin Provisioning: Not Supported 00:07:47.000 Per-NS Atomic Units: No 00:07:47.000 Maximum Single Source Range Length: 128 00:07:47.000 Maximum Copy Length: 128 00:07:47.000 Maximum Source Range Count: 128 00:07:47.000 NGUID/EUI64 Never Reused: No 00:07:47.000 Namespace Write Protected: No 00:07:47.000 Number of LBA Formats: 8 00:07:47.000 Current LBA Format: LBA Format #04 00:07:47.000 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:47.000 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:47.000 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:47.000 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:47.000 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:47.000 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:47.000 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:47.000 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:47.000 00:07:47.000 NVM Specific Namespace Data 00:07:47.000 =========================== 00:07:47.000 Logical Block Storage Tag Mask: 0 00:07:47.000 Protection Information Capabilities: 00:07:47.000 16b Guard Protection Information Storage Tag Support: No 00:07:47.000 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:47.000 Storage Tag Check Read Support: No 00:07:47.000 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.000 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.000 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.000 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.000 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.000 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.000 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.000 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.000 02:52:17 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:47.000 02:52:17 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:47.274 ===================================================== 00:07:47.274 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:47.274 ===================================================== 00:07:47.274 Controller Capabilities/Features 00:07:47.274 ================================ 00:07:47.274 Vendor ID: 1b36 00:07:47.274 Subsystem Vendor ID: 1af4 00:07:47.274 Serial Number: 12343 00:07:47.274 Model Number: QEMU NVMe Ctrl 00:07:47.274 Firmware Version: 8.0.0 00:07:47.274 Recommended Arb Burst: 6 00:07:47.274 IEEE OUI Identifier: 00 54 52 00:07:47.274 Multi-path I/O 00:07:47.274 May have multiple subsystem ports: No 00:07:47.274 May have multiple controllers: Yes 00:07:47.274 Associated with SR-IOV VF: No 00:07:47.274 Max Data Transfer Size: 524288 00:07:47.274 Max Number of Namespaces: 256 00:07:47.274 Max Number of I/O Queues: 64 00:07:47.274 NVMe Specification Version (VS): 1.4 00:07:47.274 NVMe Specification Version (Identify): 1.4 00:07:47.274 Maximum Queue Entries: 2048 
00:07:47.274 Contiguous Queues Required: Yes 00:07:47.274 Arbitration Mechanisms Supported 00:07:47.274 Weighted Round Robin: Not Supported 00:07:47.274 Vendor Specific: Not Supported 00:07:47.274 Reset Timeout: 7500 ms 00:07:47.274 Doorbell Stride: 4 bytes 00:07:47.274 NVM Subsystem Reset: Not Supported 00:07:47.274 Command Sets Supported 00:07:47.274 NVM Command Set: Supported 00:07:47.274 Boot Partition: Not Supported 00:07:47.274 Memory Page Size Minimum: 4096 bytes 00:07:47.274 Memory Page Size Maximum: 65536 bytes 00:07:47.274 Persistent Memory Region: Not Supported 00:07:47.274 Optional Asynchronous Events Supported 00:07:47.274 Namespace Attribute Notices: Supported 00:07:47.274 Firmware Activation Notices: Not Supported 00:07:47.274 ANA Change Notices: Not Supported 00:07:47.274 PLE Aggregate Log Change Notices: Not Supported 00:07:47.274 LBA Status Info Alert Notices: Not Supported 00:07:47.274 EGE Aggregate Log Change Notices: Not Supported 00:07:47.274 Normal NVM Subsystem Shutdown event: Not Supported 00:07:47.274 Zone Descriptor Change Notices: Not Supported 00:07:47.274 Discovery Log Change Notices: Not Supported 00:07:47.274 Controller Attributes 00:07:47.274 128-bit Host Identifier: Not Supported 00:07:47.274 Non-Operational Permissive Mode: Not Supported 00:07:47.274 NVM Sets: Not Supported 00:07:47.274 Read Recovery Levels: Not Supported 00:07:47.274 Endurance Groups: Supported 00:07:47.274 Predictable Latency Mode: Not Supported 00:07:47.274 Traffic Based Keep ALive: Not Supported 00:07:47.274 Namespace Granularity: Not Supported 00:07:47.274 SQ Associations: Not Supported 00:07:47.274 UUID List: Not Supported 00:07:47.274 Multi-Domain Subsystem: Not Supported 00:07:47.274 Fixed Capacity Management: Not Supported 00:07:47.274 Variable Capacity Management: Not Supported 00:07:47.274 Delete Endurance Group: Not Supported 00:07:47.274 Delete NVM Set: Not Supported 00:07:47.274 Extended LBA Formats Supported: Supported 00:07:47.274 Flexible Data Placement Supported: Supported 00:07:47.274 00:07:47.274 Controller Memory Buffer Support 00:07:47.274 ================================ 00:07:47.274 Supported: No 00:07:47.274 00:07:47.274 Persistent Memory Region Support 00:07:47.274 ================================ 00:07:47.274 Supported: No 00:07:47.274 00:07:47.274 Admin Command Set Attributes 00:07:47.274 ============================ 00:07:47.274 Security Send/Receive: Not Supported 00:07:47.274 Format NVM: Supported 00:07:47.274 Firmware Activate/Download: Not Supported 00:07:47.274 Namespace Management: Supported 00:07:47.274 Device Self-Test: Not Supported 00:07:47.274 Directives: Supported 00:07:47.274 NVMe-MI: Not Supported 00:07:47.274 Virtualization Management: Not Supported 00:07:47.274 Doorbell Buffer Config: Supported 00:07:47.274 Get LBA Status Capability: Not Supported 00:07:47.274 Command & Feature Lockdown Capability: Not Supported 00:07:47.274 Abort Command Limit: 4 00:07:47.274 Async Event Request Limit: 4 00:07:47.274 Number of Firmware Slots: N/A 00:07:47.274 Firmware Slot 1 Read-Only: N/A 00:07:47.274 Firmware Activation Without Reset: N/A 00:07:47.274 Multiple Update Detection Support: N/A 00:07:47.274 Firmware Update Granularity: No Information Provided 00:07:47.274 Per-Namespace SMART Log: Yes 00:07:47.274 Asymmetric Namespace Access Log Page: Not Supported 00:07:47.274 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:47.274 Command Effects Log Page: Supported 00:07:47.274 Get Log Page Extended Data: Supported 00:07:47.274 Telemetry Log Pages: Not 
Supported 00:07:47.274 Persistent Event Log Pages: Not Supported 00:07:47.274 Supported Log Pages Log Page: May Support 00:07:47.274 Commands Supported & Effects Log Page: Not Supported 00:07:47.274 Feature Identifiers & Effects Log Page:May Support 00:07:47.274 NVMe-MI Commands & Effects Log Page: May Support 00:07:47.274 Data Area 4 for Telemetry Log: Not Supported 00:07:47.274 Error Log Page Entries Supported: 1 00:07:47.274 Keep Alive: Not Supported 00:07:47.274 00:07:47.274 NVM Command Set Attributes 00:07:47.274 ========================== 00:07:47.274 Submission Queue Entry Size 00:07:47.274 Max: 64 00:07:47.274 Min: 64 00:07:47.274 Completion Queue Entry Size 00:07:47.274 Max: 16 00:07:47.274 Min: 16 00:07:47.274 Number of Namespaces: 256 00:07:47.274 Compare Command: Supported 00:07:47.274 Write Uncorrectable Command: Not Supported 00:07:47.274 Dataset Management Command: Supported 00:07:47.274 Write Zeroes Command: Supported 00:07:47.274 Set Features Save Field: Supported 00:07:47.274 Reservations: Not Supported 00:07:47.274 Timestamp: Supported 00:07:47.274 Copy: Supported 00:07:47.274 Volatile Write Cache: Present 00:07:47.274 Atomic Write Unit (Normal): 1 00:07:47.274 Atomic Write Unit (PFail): 1 00:07:47.274 Atomic Compare & Write Unit: 1 00:07:47.274 Fused Compare & Write: Not Supported 00:07:47.274 Scatter-Gather List 00:07:47.274 SGL Command Set: Supported 00:07:47.274 SGL Keyed: Not Supported 00:07:47.274 SGL Bit Bucket Descriptor: Not Supported 00:07:47.274 SGL Metadata Pointer: Not Supported 00:07:47.274 Oversized SGL: Not Supported 00:07:47.274 SGL Metadata Address: Not Supported 00:07:47.274 SGL Offset: Not Supported 00:07:47.274 Transport SGL Data Block: Not Supported 00:07:47.274 Replay Protected Memory Block: Not Supported 00:07:47.274 00:07:47.274 Firmware Slot Information 00:07:47.274 ========================= 00:07:47.274 Active slot: 1 00:07:47.274 Slot 1 Firmware Revision: 1.0 00:07:47.274 00:07:47.274 00:07:47.274 Commands Supported and Effects 00:07:47.274 ============================== 00:07:47.274 Admin Commands 00:07:47.274 -------------- 00:07:47.275 Delete I/O Submission Queue (00h): Supported 00:07:47.275 Create I/O Submission Queue (01h): Supported 00:07:47.275 Get Log Page (02h): Supported 00:07:47.275 Delete I/O Completion Queue (04h): Supported 00:07:47.275 Create I/O Completion Queue (05h): Supported 00:07:47.275 Identify (06h): Supported 00:07:47.275 Abort (08h): Supported 00:07:47.275 Set Features (09h): Supported 00:07:47.275 Get Features (0Ah): Supported 00:07:47.275 Asynchronous Event Request (0Ch): Supported 00:07:47.275 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:47.275 Directive Send (19h): Supported 00:07:47.275 Directive Receive (1Ah): Supported 00:07:47.275 Virtualization Management (1Ch): Supported 00:07:47.275 Doorbell Buffer Config (7Ch): Supported 00:07:47.275 Format NVM (80h): Supported LBA-Change 00:07:47.275 I/O Commands 00:07:47.275 ------------ 00:07:47.275 Flush (00h): Supported LBA-Change 00:07:47.275 Write (01h): Supported LBA-Change 00:07:47.275 Read (02h): Supported 00:07:47.275 Compare (05h): Supported 00:07:47.275 Write Zeroes (08h): Supported LBA-Change 00:07:47.275 Dataset Management (09h): Supported LBA-Change 00:07:47.275 Unknown (0Ch): Supported 00:07:47.275 Unknown (12h): Supported 00:07:47.275 Copy (19h): Supported LBA-Change 00:07:47.275 Unknown (1Dh): Supported LBA-Change 00:07:47.275 00:07:47.275 Error Log 00:07:47.275 ========= 00:07:47.275 00:07:47.275 Arbitration 00:07:47.275 =========== 
00:07:47.275 Arbitration Burst: no limit 00:07:47.275 00:07:47.275 Power Management 00:07:47.275 ================ 00:07:47.275 Number of Power States: 1 00:07:47.275 Current Power State: Power State #0 00:07:47.275 Power State #0: 00:07:47.275 Max Power: 25.00 W 00:07:47.275 Non-Operational State: Operational 00:07:47.275 Entry Latency: 16 microseconds 00:07:47.275 Exit Latency: 4 microseconds 00:07:47.275 Relative Read Throughput: 0 00:07:47.275 Relative Read Latency: 0 00:07:47.275 Relative Write Throughput: 0 00:07:47.275 Relative Write Latency: 0 00:07:47.275 Idle Power: Not Reported 00:07:47.275 Active Power: Not Reported 00:07:47.275 Non-Operational Permissive Mode: Not Supported 00:07:47.275 00:07:47.275 Health Information 00:07:47.275 ================== 00:07:47.275 Critical Warnings: 00:07:47.275 Available Spare Space: OK 00:07:47.275 Temperature: OK 00:07:47.275 Device Reliability: OK 00:07:47.275 Read Only: No 00:07:47.275 Volatile Memory Backup: OK 00:07:47.275 Current Temperature: 323 Kelvin (50 Celsius) 00:07:47.275 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:47.275 Available Spare: 0% 00:07:47.275 Available Spare Threshold: 0% 00:07:47.275 Life Percentage Used: 0% 00:07:47.275 Data Units Read: 827 00:07:47.275 Data Units Written: 756 00:07:47.275 Host Read Commands: 36837 00:07:47.275 Host Write Commands: 36261 00:07:47.275 Controller Busy Time: 0 minutes 00:07:47.275 Power Cycles: 0 00:07:47.275 Power On Hours: 0 hours 00:07:47.275 Unsafe Shutdowns: 0 00:07:47.275 Unrecoverable Media Errors: 0 00:07:47.275 Lifetime Error Log Entries: 0 00:07:47.275 Warning Temperature Time: 0 minutes 00:07:47.275 Critical Temperature Time: 0 minutes 00:07:47.275 00:07:47.275 Number of Queues 00:07:47.275 ================ 00:07:47.275 Number of I/O Submission Queues: 64 00:07:47.275 Number of I/O Completion Queues: 64 00:07:47.275 00:07:47.275 ZNS Specific Controller Data 00:07:47.275 ============================ 00:07:47.275 Zone Append Size Limit: 0 00:07:47.275 00:07:47.275 00:07:47.275 Active Namespaces 00:07:47.275 ================= 00:07:47.275 Namespace ID:1 00:07:47.275 Error Recovery Timeout: Unlimited 00:07:47.275 Command Set Identifier: NVM (00h) 00:07:47.275 Deallocate: Supported 00:07:47.275 Deallocated/Unwritten Error: Supported 00:07:47.275 Deallocated Read Value: All 0x00 00:07:47.275 Deallocate in Write Zeroes: Not Supported 00:07:47.275 Deallocated Guard Field: 0xFFFF 00:07:47.275 Flush: Supported 00:07:47.275 Reservation: Not Supported 00:07:47.275 Namespace Sharing Capabilities: Multiple Controllers 00:07:47.275 Size (in LBAs): 262144 (1GiB) 00:07:47.275 Capacity (in LBAs): 262144 (1GiB) 00:07:47.275 Utilization (in LBAs): 262144 (1GiB) 00:07:47.275 Thin Provisioning: Not Supported 00:07:47.275 Per-NS Atomic Units: No 00:07:47.275 Maximum Single Source Range Length: 128 00:07:47.275 Maximum Copy Length: 128 00:07:47.275 Maximum Source Range Count: 128 00:07:47.275 NGUID/EUI64 Never Reused: No 00:07:47.275 Namespace Write Protected: No 00:07:47.275 Endurance group ID: 1 00:07:47.275 Number of LBA Formats: 8 00:07:47.275 Current LBA Format: LBA Format #04 00:07:47.275 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:47.275 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:47.275 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:47.275 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:47.275 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:47.275 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:47.275 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:47.275 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:47.275 00:07:47.275 Get Feature FDP: 00:07:47.275 ================ 00:07:47.275 Enabled: Yes 00:07:47.275 FDP configuration index: 0 00:07:47.275 00:07:47.275 FDP configurations log page 00:07:47.275 =========================== 00:07:47.275 Number of FDP configurations: 1 00:07:47.275 Version: 0 00:07:47.275 Size: 112 00:07:47.275 FDP Configuration Descriptor: 0 00:07:47.275 Descriptor Size: 96 00:07:47.275 Reclaim Group Identifier format: 2 00:07:47.275 FDP Volatile Write Cache: Not Present 00:07:47.275 FDP Configuration: Valid 00:07:47.275 Vendor Specific Size: 0 00:07:47.275 Number of Reclaim Groups: 2 00:07:47.275 Number of Reclaim Unit Handles: 8 00:07:47.275 Max Placement Identifiers: 128 00:07:47.275 Number of Namespaces Supported: 256 00:07:47.275 Reclaim unit Nominal Size: 6000000 bytes 00:07:47.275 Estimated Reclaim Unit Time Limit: Not Reported 00:07:47.275 RUH Desc #000: RUH Type: Initially Isolated 00:07:47.275 RUH Desc #001: RUH Type: Initially Isolated 00:07:47.275 RUH Desc #002: RUH Type: Initially Isolated 00:07:47.275 RUH Desc #003: RUH Type: Initially Isolated 00:07:47.275 RUH Desc #004: RUH Type: Initially Isolated 00:07:47.275 RUH Desc #005: RUH Type: Initially Isolated 00:07:47.275 RUH Desc #006: RUH Type: Initially Isolated 00:07:47.275 RUH Desc #007: RUH Type: Initially Isolated 00:07:47.275 00:07:47.275 FDP reclaim unit handle usage log page 00:07:47.275 ====================================== 00:07:47.275 Number of Reclaim Unit Handles: 8 00:07:47.275 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:47.275 RUH Usage Desc #001: RUH Attributes: Unused 00:07:47.275 RUH Usage Desc #002: RUH Attributes: Unused 00:07:47.275 RUH Usage Desc #003: RUH Attributes: Unused 00:07:47.275 RUH Usage Desc #004: RUH Attributes: Unused 00:07:47.275 RUH Usage Desc #005: RUH Attributes: Unused 00:07:47.275 RUH Usage Desc #006: RUH Attributes: Unused 00:07:47.275 RUH Usage Desc #007: RUH Attributes: Unused 00:07:47.275 00:07:47.275 FDP statistics log page 00:07:47.275 ======================= 00:07:47.275 Host bytes with metadata written: 489529344 00:07:47.275 Media bytes with metadata written: 489582592 00:07:47.275 Media bytes erased: 0 00:07:47.275 00:07:47.275 FDP events log page 00:07:47.275 =================== 00:07:47.275 Number of FDP events: 0 00:07:47.275 00:07:47.275 NVM Specific Namespace Data 00:07:47.275 =========================== 00:07:47.275 Logical Block Storage Tag Mask: 0 00:07:47.275 Protection Information Capabilities: 00:07:47.275 16b Guard Protection Information Storage Tag Support: No 00:07:47.275 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:47.275 Storage Tag Check Read Support: No 00:07:47.275 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.275 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.275 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.275 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.275 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.275 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.275 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.275 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.275 00:07:47.275 real 0m1.189s 00:07:47.275 user 0m0.454s 00:07:47.275 sys 0m0.526s 00:07:47.275 02:52:17 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.275 02:52:17 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:47.275 ************************************ 00:07:47.275 END TEST nvme_identify 00:07:47.275 ************************************ 00:07:47.275 02:52:17 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:47.276 02:52:17 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.276 02:52:17 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.276 02:52:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:47.276 ************************************ 00:07:47.276 START TEST nvme_perf 00:07:47.276 ************************************ 00:07:47.276 02:52:17 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:47.276 02:52:17 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:48.654 Initializing NVMe Controllers 00:07:48.654 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:48.654 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:48.654 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:48.654 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:48.654 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:48.654 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:48.654 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:48.654 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:48.654 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:48.654 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:48.654 Initialization complete. Launching workers. 
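The workers launched above belong to the spdk_nvme_perf invocation recorded earlier in this test: 128 outstanding I/Os per queue (-q 128), sequential reads (-w read) of 12288-byte blocks (-o 12288) for 1 second (-t 1), with software latency tracking requested twice (-LL) so the per-device percentile summaries and bucketed histograms that follow are printed. A minimal sketch of reproducing a comparable run by hand, assuming an SPDK build under ./build and that the target controllers have already been rebound to a userspace driver with scripts/setup.sh (the remaining flags, -i 0 and -N, are carried over verbatim from the harness command rather than explained here):

  # rebind the target NVMe controllers away from the kernel driver (run from the SPDK repo root)
  sudo ./scripts/setup.sh
  # same shape as the harness run; -L enables latency tracking and a second L requests the detailed histograms seen below
  sudo ./build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N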
00:07:48.654 ======================================================== 00:07:48.654 Latency(us) 00:07:48.655 Device Information : IOPS MiB/s Average min max 00:07:48.655 PCIE (0000:00:11.0) NSID 1 from core 0: 11114.94 130.25 11534.74 7103.26 34586.41 00:07:48.655 PCIE (0000:00:13.0) NSID 1 from core 0: 11114.94 130.25 11520.30 7286.16 33218.92 00:07:48.655 PCIE (0000:00:10.0) NSID 1 from core 0: 11114.94 130.25 11501.81 7354.31 31880.10 00:07:48.655 PCIE (0000:00:12.0) NSID 1 from core 0: 11114.94 130.25 11484.84 7504.13 29969.51 00:07:48.655 PCIE (0000:00:12.0) NSID 2 from core 0: 11114.94 130.25 11467.31 6845.26 29252.05 00:07:48.655 PCIE (0000:00:12.0) NSID 3 from core 0: 11178.82 131.00 11384.49 6736.70 22664.66 00:07:48.655 ======================================================== 00:07:48.655 Total : 66753.50 782.27 11482.16 6736.70 34586.41 00:07:48.655 00:07:48.655 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:48.655 ================================================================================= 00:07:48.655 1.00000% : 8318.031us 00:07:48.655 10.00000% : 9729.575us 00:07:48.655 25.00000% : 10132.874us 00:07:48.655 50.00000% : 10586.585us 00:07:48.655 75.00000% : 11292.357us 00:07:48.655 90.00000% : 15325.342us 00:07:48.655 95.00000% : 17039.360us 00:07:48.655 98.00000% : 18753.378us 00:07:48.655 99.00000% : 28432.542us 00:07:48.655 99.50000% : 33473.772us 00:07:48.655 99.90000% : 34482.018us 00:07:48.655 99.99000% : 34683.668us 00:07:48.655 99.99900% : 34683.668us 00:07:48.655 99.99990% : 34683.668us 00:07:48.655 99.99999% : 34683.668us 00:07:48.655 00:07:48.655 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:48.655 ================================================================================= 00:07:48.655 1.00000% : 8065.969us 00:07:48.655 10.00000% : 9729.575us 00:07:48.655 25.00000% : 10132.874us 00:07:48.655 50.00000% : 10586.585us 00:07:48.655 75.00000% : 11292.357us 00:07:48.655 90.00000% : 15526.991us 00:07:48.655 95.00000% : 16837.711us 00:07:48.655 98.00000% : 19055.852us 00:07:48.655 99.00000% : 27020.997us 00:07:48.655 99.50000% : 32062.228us 00:07:48.655 99.90000% : 33070.474us 00:07:48.655 99.99000% : 33272.123us 00:07:48.655 99.99900% : 33272.123us 00:07:48.655 99.99990% : 33272.123us 00:07:48.655 99.99999% : 33272.123us 00:07:48.655 00:07:48.655 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:48.655 ================================================================================= 00:07:48.655 1.00000% : 8267.618us 00:07:48.655 10.00000% : 9679.163us 00:07:48.655 25.00000% : 10132.874us 00:07:48.655 50.00000% : 10586.585us 00:07:48.655 75.00000% : 11342.769us 00:07:48.655 90.00000% : 15426.166us 00:07:48.655 95.00000% : 16636.062us 00:07:48.655 98.00000% : 19358.326us 00:07:48.655 99.00000% : 25105.329us 00:07:48.655 99.50000% : 30449.034us 00:07:48.655 99.90000% : 31658.929us 00:07:48.655 99.99000% : 31860.578us 00:07:48.655 99.99900% : 32062.228us 00:07:48.655 99.99990% : 32062.228us 00:07:48.655 99.99999% : 32062.228us 00:07:48.655 00:07:48.655 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:48.655 ================================================================================= 00:07:48.655 1.00000% : 8166.794us 00:07:48.655 10.00000% : 9729.575us 00:07:48.655 25.00000% : 10132.874us 00:07:48.655 50.00000% : 10586.585us 00:07:48.655 75.00000% : 11292.357us 00:07:48.655 90.00000% : 15526.991us 00:07:48.655 95.00000% : 16636.062us 00:07:48.655 98.00000% : 19156.677us 
00:07:48.655 99.00000% : 23391.311us 00:07:48.655 99.50000% : 28835.840us 00:07:48.655 99.90000% : 29844.086us 00:07:48.655 99.99000% : 30045.735us 00:07:48.655 99.99900% : 30045.735us 00:07:48.655 99.99990% : 30045.735us 00:07:48.655 99.99999% : 30045.735us 00:07:48.655 00:07:48.655 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:48.655 ================================================================================= 00:07:48.655 1.00000% : 8166.794us 00:07:48.655 10.00000% : 9679.163us 00:07:48.655 25.00000% : 10132.874us 00:07:48.655 50.00000% : 10586.585us 00:07:48.655 75.00000% : 11342.769us 00:07:48.655 90.00000% : 15224.517us 00:07:48.655 95.00000% : 16736.886us 00:07:48.655 98.00000% : 19660.800us 00:07:48.655 99.00000% : 22080.591us 00:07:48.655 99.50000% : 28029.243us 00:07:48.655 99.90000% : 29037.489us 00:07:48.655 99.99000% : 29239.138us 00:07:48.655 99.99900% : 29440.788us 00:07:48.655 99.99990% : 29440.788us 00:07:48.655 99.99999% : 29440.788us 00:07:48.655 00:07:48.655 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:48.655 ================================================================================= 00:07:48.655 1.00000% : 8418.855us 00:07:48.655 10.00000% : 9679.163us 00:07:48.655 25.00000% : 10132.874us 00:07:48.655 50.00000% : 10586.585us 00:07:48.655 75.00000% : 11241.945us 00:07:48.655 90.00000% : 15325.342us 00:07:48.655 95.00000% : 16837.711us 00:07:48.655 98.00000% : 18249.255us 00:07:48.655 99.00000% : 20064.098us 00:07:48.655 99.50000% : 21374.818us 00:07:48.655 99.90000% : 22483.889us 00:07:48.655 99.99000% : 22685.538us 00:07:48.655 99.99900% : 22685.538us 00:07:48.655 99.99990% : 22685.538us 00:07:48.655 99.99999% : 22685.538us 00:07:48.655 00:07:48.655 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:48.655 ============================================================================== 00:07:48.655 Range in us Cumulative IO count 00:07:48.655 7057.723 - 7108.135: 0.0090% ( 1) 00:07:48.655 7108.135 - 7158.548: 0.0988% ( 10) 00:07:48.655 7158.548 - 7208.960: 0.1437% ( 5) 00:07:48.655 7259.372 - 7309.785: 0.1796% ( 4) 00:07:48.655 7309.785 - 7360.197: 0.2065% ( 3) 00:07:48.655 7360.197 - 7410.609: 0.2425% ( 4) 00:07:48.655 7410.609 - 7461.022: 0.2784% ( 4) 00:07:48.655 7461.022 - 7511.434: 0.3143% ( 4) 00:07:48.655 7511.434 - 7561.846: 0.3412% ( 3) 00:07:48.655 7561.846 - 7612.258: 0.3772% ( 4) 00:07:48.655 7612.258 - 7662.671: 0.4131% ( 4) 00:07:48.655 7662.671 - 7713.083: 0.4490% ( 4) 00:07:48.655 7713.083 - 7763.495: 0.4759% ( 3) 00:07:48.655 7763.495 - 7813.908: 0.5119% ( 4) 00:07:48.655 7813.908 - 7864.320: 0.5388% ( 3) 00:07:48.655 7864.320 - 7914.732: 0.6017% ( 7) 00:07:48.655 7914.732 - 7965.145: 0.6376% ( 4) 00:07:48.655 7965.145 - 8015.557: 0.6825% ( 5) 00:07:48.655 8015.557 - 8065.969: 0.7274% ( 5) 00:07:48.655 8065.969 - 8116.382: 0.7633% ( 4) 00:07:48.655 8116.382 - 8166.794: 0.8710% ( 12) 00:07:48.655 8166.794 - 8217.206: 0.8980% ( 3) 00:07:48.655 8217.206 - 8267.618: 0.9519% ( 6) 00:07:48.655 8267.618 - 8318.031: 1.0057% ( 6) 00:07:48.655 8318.031 - 8368.443: 1.0686% ( 7) 00:07:48.655 8368.443 - 8418.855: 1.1045% ( 4) 00:07:48.655 8418.855 - 8469.268: 1.1494% ( 5) 00:07:48.655 8469.268 - 8519.680: 1.2123% ( 7) 00:07:48.655 8519.680 - 8570.092: 1.2931% ( 9) 00:07:48.655 8570.092 - 8620.505: 1.3649% ( 8) 00:07:48.655 8620.505 - 8670.917: 1.4547% ( 10) 00:07:48.655 8670.917 - 8721.329: 1.5445% ( 10) 00:07:48.655 8721.329 - 8771.742: 1.6703% ( 14) 00:07:48.655 8771.742 - 8822.154: 
1.7960% ( 14) 00:07:48.655 8822.154 - 8872.566: 1.9397% ( 16) 00:07:48.655 8872.566 - 8922.978: 2.0744% ( 15) 00:07:48.655 8922.978 - 8973.391: 2.2001% ( 14) 00:07:48.655 8973.391 - 9023.803: 2.4156% ( 24) 00:07:48.655 9023.803 - 9074.215: 2.7029% ( 32) 00:07:48.655 9074.215 - 9124.628: 3.0262% ( 36) 00:07:48.655 9124.628 - 9175.040: 3.4034% ( 42) 00:07:48.655 9175.040 - 9225.452: 4.0320% ( 70) 00:07:48.655 9225.452 - 9275.865: 4.3642% ( 37) 00:07:48.655 9275.865 - 9326.277: 4.7504% ( 43) 00:07:48.655 9326.277 - 9376.689: 5.0916% ( 38) 00:07:48.655 9376.689 - 9427.102: 5.5316% ( 49) 00:07:48.655 9427.102 - 9477.514: 6.0614% ( 59) 00:07:48.655 9477.514 - 9527.926: 6.8247% ( 85) 00:07:48.655 9527.926 - 9578.338: 7.6329% ( 90) 00:07:48.655 9578.338 - 9628.751: 8.5578% ( 103) 00:07:48.655 9628.751 - 9679.163: 9.6713% ( 124) 00:07:48.655 9679.163 - 9729.575: 10.9465% ( 142) 00:07:48.655 9729.575 - 9779.988: 12.6078% ( 185) 00:07:48.655 9779.988 - 9830.400: 14.1792% ( 175) 00:07:48.655 9830.400 - 9880.812: 15.8315% ( 184) 00:07:48.655 9880.812 - 9931.225: 17.7712% ( 216) 00:07:48.655 9931.225 - 9981.637: 19.8904% ( 236) 00:07:48.655 9981.637 - 10032.049: 22.0546% ( 241) 00:07:48.655 10032.049 - 10082.462: 24.4612% ( 268) 00:07:48.655 10082.462 - 10132.874: 26.7062% ( 250) 00:07:48.655 10132.874 - 10183.286: 29.1846% ( 276) 00:07:48.655 10183.286 - 10233.698: 31.7978% ( 291) 00:07:48.655 10233.698 - 10284.111: 34.4019% ( 290) 00:07:48.655 10284.111 - 10334.523: 37.5000% ( 345) 00:07:48.655 10334.523 - 10384.935: 40.6789% ( 354) 00:07:48.655 10384.935 - 10435.348: 43.6333% ( 329) 00:07:48.655 10435.348 - 10485.760: 46.5876% ( 329) 00:07:48.655 10485.760 - 10536.172: 49.3534% ( 308) 00:07:48.655 10536.172 - 10586.585: 51.8588% ( 279) 00:07:48.655 10586.585 - 10636.997: 54.4899% ( 293) 00:07:48.655 10636.997 - 10687.409: 57.1031% ( 291) 00:07:48.655 10687.409 - 10737.822: 59.4558% ( 262) 00:07:48.655 10737.822 - 10788.234: 61.6649% ( 246) 00:07:48.655 10788.234 - 10838.646: 63.7662% ( 234) 00:07:48.655 10838.646 - 10889.058: 65.7597% ( 222) 00:07:48.655 10889.058 - 10939.471: 67.5287% ( 197) 00:07:48.655 10939.471 - 10989.883: 69.0733% ( 172) 00:07:48.655 10989.883 - 11040.295: 70.4741% ( 156) 00:07:48.656 11040.295 - 11090.708: 71.8481% ( 153) 00:07:48.656 11090.708 - 11141.120: 72.9885% ( 127) 00:07:48.656 11141.120 - 11191.532: 73.9494% ( 107) 00:07:48.656 11191.532 - 11241.945: 74.7486% ( 89) 00:07:48.656 11241.945 - 11292.357: 75.3592% ( 68) 00:07:48.656 11292.357 - 11342.769: 75.8531% ( 55) 00:07:48.656 11342.769 - 11393.182: 76.2662% ( 46) 00:07:48.656 11393.182 - 11443.594: 76.6164% ( 39) 00:07:48.656 11443.594 - 11494.006: 76.9576% ( 38) 00:07:48.656 11494.006 - 11544.418: 77.2629% ( 34) 00:07:48.656 11544.418 - 11594.831: 77.4695% ( 23) 00:07:48.656 11594.831 - 11645.243: 77.6670% ( 22) 00:07:48.656 11645.243 - 11695.655: 77.8466% ( 20) 00:07:48.656 11695.655 - 11746.068: 78.0262% ( 20) 00:07:48.656 11746.068 - 11796.480: 78.1968% ( 19) 00:07:48.656 11796.480 - 11846.892: 78.3854% ( 21) 00:07:48.656 11846.892 - 11897.305: 78.5471% ( 18) 00:07:48.656 11897.305 - 11947.717: 78.6728% ( 14) 00:07:48.656 11947.717 - 11998.129: 78.8165% ( 16) 00:07:48.656 11998.129 - 12048.542: 78.9871% ( 19) 00:07:48.656 12048.542 - 12098.954: 79.1487% ( 18) 00:07:48.656 12098.954 - 12149.366: 79.3732% ( 25) 00:07:48.656 12149.366 - 12199.778: 79.5708% ( 22) 00:07:48.656 12199.778 - 12250.191: 79.7683% ( 22) 00:07:48.656 12250.191 - 12300.603: 79.9838% ( 24) 00:07:48.656 12300.603 - 12351.015: 80.2173% ( 26) 
00:07:48.656 12351.015 - 12401.428: 80.4239% ( 23) 00:07:48.656 12401.428 - 12451.840: 80.6394% ( 24) 00:07:48.656 12451.840 - 12502.252: 80.8100% ( 19) 00:07:48.656 12502.252 - 12552.665: 80.9537% ( 16) 00:07:48.656 12552.665 - 12603.077: 81.0884% ( 15) 00:07:48.656 12603.077 - 12653.489: 81.2141% ( 14) 00:07:48.656 12653.489 - 12703.902: 81.3308% ( 13) 00:07:48.656 12703.902 - 12754.314: 81.5284% ( 22) 00:07:48.656 12754.314 - 12804.726: 81.6721% ( 16) 00:07:48.656 12804.726 - 12855.138: 81.8696% ( 22) 00:07:48.656 12855.138 - 12905.551: 82.0223% ( 17) 00:07:48.656 12905.551 - 13006.375: 82.2917% ( 30) 00:07:48.656 13006.375 - 13107.200: 82.6329% ( 38) 00:07:48.656 13107.200 - 13208.025: 82.8574% ( 25) 00:07:48.656 13208.025 - 13308.849: 83.1178% ( 29) 00:07:48.656 13308.849 - 13409.674: 83.3513% ( 26) 00:07:48.656 13409.674 - 13510.498: 83.5668% ( 24) 00:07:48.656 13510.498 - 13611.323: 83.8721% ( 34) 00:07:48.656 13611.323 - 13712.148: 84.2044% ( 37) 00:07:48.656 13712.148 - 13812.972: 84.5277% ( 36) 00:07:48.656 13812.972 - 13913.797: 84.8509% ( 36) 00:07:48.656 13913.797 - 14014.622: 85.2371% ( 43) 00:07:48.656 14014.622 - 14115.446: 85.6681% ( 48) 00:07:48.656 14115.446 - 14216.271: 86.1261% ( 51) 00:07:48.656 14216.271 - 14317.095: 86.5751% ( 50) 00:07:48.656 14317.095 - 14417.920: 86.8894% ( 35) 00:07:48.656 14417.920 - 14518.745: 87.2665% ( 42) 00:07:48.656 14518.745 - 14619.569: 87.6167% ( 39) 00:07:48.656 14619.569 - 14720.394: 88.2184% ( 67) 00:07:48.656 14720.394 - 14821.218: 88.6315% ( 46) 00:07:48.656 14821.218 - 14922.043: 88.9907% ( 40) 00:07:48.656 14922.043 - 15022.868: 89.3050% ( 35) 00:07:48.656 15022.868 - 15123.692: 89.6103% ( 34) 00:07:48.656 15123.692 - 15224.517: 89.8797% ( 30) 00:07:48.656 15224.517 - 15325.342: 90.1491% ( 30) 00:07:48.656 15325.342 - 15426.166: 90.4185% ( 30) 00:07:48.656 15426.166 - 15526.991: 90.6699% ( 28) 00:07:48.656 15526.991 - 15627.815: 90.9393% ( 30) 00:07:48.656 15627.815 - 15728.640: 91.2536% ( 35) 00:07:48.656 15728.640 - 15829.465: 91.6577% ( 45) 00:07:48.656 15829.465 - 15930.289: 91.9720% ( 35) 00:07:48.656 15930.289 - 16031.114: 92.2953% ( 36) 00:07:48.656 16031.114 - 16131.938: 92.5736% ( 31) 00:07:48.656 16131.938 - 16232.763: 92.8251% ( 28) 00:07:48.656 16232.763 - 16333.588: 93.0855% ( 29) 00:07:48.656 16333.588 - 16434.412: 93.5165% ( 48) 00:07:48.656 16434.412 - 16535.237: 93.9116% ( 44) 00:07:48.656 16535.237 - 16636.062: 94.2619% ( 39) 00:07:48.656 16636.062 - 16736.886: 94.5312% ( 30) 00:07:48.656 16736.886 - 16837.711: 94.7737% ( 27) 00:07:48.656 16837.711 - 16938.535: 94.9713% ( 22) 00:07:48.656 16938.535 - 17039.360: 95.1868% ( 24) 00:07:48.656 17039.360 - 17140.185: 95.4472% ( 29) 00:07:48.656 17140.185 - 17241.009: 95.5729% ( 14) 00:07:48.656 17241.009 - 17341.834: 95.6717% ( 11) 00:07:48.656 17341.834 - 17442.658: 95.7795% ( 12) 00:07:48.656 17442.658 - 17543.483: 95.8962% ( 13) 00:07:48.656 17543.483 - 17644.308: 96.0309% ( 15) 00:07:48.656 17644.308 - 17745.132: 96.2015% ( 19) 00:07:48.656 17745.132 - 17845.957: 96.4709% ( 30) 00:07:48.656 17845.957 - 17946.782: 96.6505% ( 20) 00:07:48.656 17946.782 - 18047.606: 96.8481% ( 22) 00:07:48.656 18047.606 - 18148.431: 97.0007% ( 17) 00:07:48.656 18148.431 - 18249.255: 97.1803% ( 20) 00:07:48.656 18249.255 - 18350.080: 97.3599% ( 20) 00:07:48.656 18350.080 - 18450.905: 97.6114% ( 28) 00:07:48.656 18450.905 - 18551.729: 97.7460% ( 15) 00:07:48.656 18551.729 - 18652.554: 97.8897% ( 16) 00:07:48.656 18652.554 - 18753.378: 98.0065% ( 13) 00:07:48.656 18753.378 - 18854.203: 
98.0603% ( 6) 00:07:48.656 18854.203 - 18955.028: 98.1232% ( 7) 00:07:48.656 18955.028 - 19055.852: 98.1861% ( 7) 00:07:48.656 19055.852 - 19156.677: 98.2399% ( 6) 00:07:48.656 19156.677 - 19257.502: 98.2759% ( 4) 00:07:48.656 20265.748 - 20366.572: 98.2938% ( 2) 00:07:48.656 20366.572 - 20467.397: 98.3387% ( 5) 00:07:48.656 20467.397 - 20568.222: 98.3926% ( 6) 00:07:48.656 20568.222 - 20669.046: 98.4555% ( 7) 00:07:48.656 20669.046 - 20769.871: 98.5093% ( 6) 00:07:48.656 20769.871 - 20870.695: 98.5632% ( 6) 00:07:48.656 20870.695 - 20971.520: 98.6171% ( 6) 00:07:48.656 20971.520 - 21072.345: 98.6620% ( 5) 00:07:48.656 21072.345 - 21173.169: 98.7159% ( 6) 00:07:48.656 21173.169 - 21273.994: 98.7698% ( 6) 00:07:48.656 21273.994 - 21374.818: 98.8147% ( 5) 00:07:48.656 21374.818 - 21475.643: 98.8506% ( 4) 00:07:48.656 27827.594 - 28029.243: 98.9134% ( 7) 00:07:48.656 28029.243 - 28230.892: 98.9943% ( 9) 00:07:48.656 28230.892 - 28432.542: 99.0751% ( 9) 00:07:48.656 28432.542 - 28634.191: 99.1469% ( 8) 00:07:48.656 28634.191 - 28835.840: 99.2277% ( 9) 00:07:48.656 28835.840 - 29037.489: 99.3085% ( 9) 00:07:48.656 29037.489 - 29239.138: 99.3804% ( 8) 00:07:48.656 29239.138 - 29440.788: 99.4253% ( 5) 00:07:48.656 33070.474 - 33272.123: 99.4881% ( 7) 00:07:48.656 33272.123 - 33473.772: 99.5690% ( 9) 00:07:48.656 33473.772 - 33675.422: 99.6408% ( 8) 00:07:48.656 33675.422 - 33877.071: 99.7126% ( 8) 00:07:48.656 33877.071 - 34078.720: 99.7935% ( 9) 00:07:48.656 34078.720 - 34280.369: 99.8743% ( 9) 00:07:48.656 34280.369 - 34482.018: 99.9551% ( 9) 00:07:48.656 34482.018 - 34683.668: 100.0000% ( 5) 00:07:48.656 00:07:48.656 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:48.656 ============================================================================== 00:07:48.656 Range in us Cumulative IO count 00:07:48.656 7259.372 - 7309.785: 0.0180% ( 2) 00:07:48.656 7309.785 - 7360.197: 0.0898% ( 8) 00:07:48.656 7360.197 - 7410.609: 0.1437% ( 6) 00:07:48.656 7410.609 - 7461.022: 0.2155% ( 8) 00:07:48.656 7461.022 - 7511.434: 0.2874% ( 8) 00:07:48.656 7511.434 - 7561.846: 0.3502% ( 7) 00:07:48.656 7561.846 - 7612.258: 0.4131% ( 7) 00:07:48.656 7612.258 - 7662.671: 0.4849% ( 8) 00:07:48.656 7662.671 - 7713.083: 0.5478% ( 7) 00:07:48.656 7713.083 - 7763.495: 0.6106% ( 7) 00:07:48.656 7763.495 - 7813.908: 0.6825% ( 8) 00:07:48.656 7813.908 - 7864.320: 0.7543% ( 8) 00:07:48.656 7864.320 - 7914.732: 0.8172% ( 7) 00:07:48.656 7914.732 - 7965.145: 0.8800% ( 7) 00:07:48.656 7965.145 - 8015.557: 0.9519% ( 8) 00:07:48.656 8015.557 - 8065.969: 1.0057% ( 6) 00:07:48.656 8065.969 - 8116.382: 1.0686% ( 7) 00:07:48.656 8116.382 - 8166.794: 1.1135% ( 5) 00:07:48.656 8166.794 - 8217.206: 1.1404% ( 3) 00:07:48.656 8217.206 - 8267.618: 1.1494% ( 1) 00:07:48.656 8368.443 - 8418.855: 1.1764% ( 3) 00:07:48.656 8418.855 - 8469.268: 1.2123% ( 4) 00:07:48.656 8469.268 - 8519.680: 1.2572% ( 5) 00:07:48.656 8519.680 - 8570.092: 1.3560% ( 11) 00:07:48.656 8570.092 - 8620.505: 1.4817% ( 14) 00:07:48.656 8620.505 - 8670.917: 1.5625% ( 9) 00:07:48.656 8670.917 - 8721.329: 1.6074% ( 5) 00:07:48.656 8721.329 - 8771.742: 1.7062% ( 11) 00:07:48.656 8771.742 - 8822.154: 1.8050% ( 11) 00:07:48.656 8822.154 - 8872.566: 1.8858% ( 9) 00:07:48.656 8872.566 - 8922.978: 2.0205% ( 15) 00:07:48.656 8922.978 - 8973.391: 2.3258% ( 34) 00:07:48.656 8973.391 - 9023.803: 2.5593% ( 26) 00:07:48.656 9023.803 - 9074.215: 2.7299% ( 19) 00:07:48.656 9074.215 - 9124.628: 2.9544% ( 25) 00:07:48.656 9124.628 - 9175.040: 3.3046% ( 39) 00:07:48.656 
9175.040 - 9225.452: 3.6907% ( 43) 00:07:48.656 9225.452 - 9275.865: 4.0050% ( 35) 00:07:48.656 9275.865 - 9326.277: 4.2565% ( 28) 00:07:48.656 9326.277 - 9376.689: 4.6516% ( 44) 00:07:48.656 9376.689 - 9427.102: 5.0198% ( 41) 00:07:48.656 9427.102 - 9477.514: 5.5136% ( 55) 00:07:48.656 9477.514 - 9527.926: 6.1602% ( 72) 00:07:48.656 9527.926 - 9578.338: 7.0582% ( 100) 00:07:48.656 9578.338 - 9628.751: 8.1268% ( 119) 00:07:48.656 9628.751 - 9679.163: 9.4019% ( 142) 00:07:48.656 9679.163 - 9729.575: 10.7220% ( 147) 00:07:48.656 9729.575 - 9779.988: 12.0690% ( 150) 00:07:48.656 9779.988 - 9830.400: 13.7931% ( 192) 00:07:48.656 9830.400 - 9880.812: 15.5981% ( 201) 00:07:48.656 9880.812 - 9931.225: 17.5377% ( 216) 00:07:48.656 9931.225 - 9981.637: 19.4774% ( 216) 00:07:48.656 9981.637 - 10032.049: 21.7942% ( 258) 00:07:48.657 10032.049 - 10082.462: 24.1290% ( 260) 00:07:48.657 10082.462 - 10132.874: 26.5894% ( 274) 00:07:48.657 10132.874 - 10183.286: 29.2654% ( 298) 00:07:48.657 10183.286 - 10233.698: 32.0043% ( 305) 00:07:48.657 10233.698 - 10284.111: 34.7342% ( 304) 00:07:48.657 10284.111 - 10334.523: 37.6257% ( 322) 00:07:48.657 10334.523 - 10384.935: 40.6250% ( 334) 00:07:48.657 10384.935 - 10435.348: 43.4986% ( 320) 00:07:48.657 10435.348 - 10485.760: 46.3272% ( 315) 00:07:48.657 10485.760 - 10536.172: 49.1020% ( 309) 00:07:48.657 10536.172 - 10586.585: 51.8499% ( 306) 00:07:48.657 10586.585 - 10636.997: 54.4810% ( 293) 00:07:48.657 10636.997 - 10687.409: 56.8427% ( 263) 00:07:48.657 10687.409 - 10737.822: 59.2672% ( 270) 00:07:48.657 10737.822 - 10788.234: 61.5392% ( 253) 00:07:48.657 10788.234 - 10838.646: 63.5866% ( 228) 00:07:48.657 10838.646 - 10889.058: 65.4813% ( 211) 00:07:48.657 10889.058 - 10939.471: 67.2144% ( 193) 00:07:48.657 10939.471 - 10989.883: 68.7949% ( 176) 00:07:48.657 10989.883 - 11040.295: 70.2586% ( 163) 00:07:48.657 11040.295 - 11090.708: 71.5158% ( 140) 00:07:48.657 11090.708 - 11141.120: 72.6203% ( 123) 00:07:48.657 11141.120 - 11191.532: 73.5812% ( 107) 00:07:48.657 11191.532 - 11241.945: 74.3714% ( 88) 00:07:48.657 11241.945 - 11292.357: 75.0180% ( 72) 00:07:48.657 11292.357 - 11342.769: 75.6017% ( 65) 00:07:48.657 11342.769 - 11393.182: 76.1225% ( 58) 00:07:48.657 11393.182 - 11443.594: 76.5984% ( 53) 00:07:48.657 11443.594 - 11494.006: 77.0474% ( 50) 00:07:48.657 11494.006 - 11544.418: 77.3886% ( 38) 00:07:48.657 11544.418 - 11594.831: 77.6401% ( 28) 00:07:48.657 11594.831 - 11645.243: 77.8287% ( 21) 00:07:48.657 11645.243 - 11695.655: 78.0262% ( 22) 00:07:48.657 11695.655 - 11746.068: 78.2328% ( 23) 00:07:48.657 11746.068 - 11796.480: 78.3764% ( 16) 00:07:48.657 11796.480 - 11846.892: 78.4662% ( 10) 00:07:48.657 11846.892 - 11897.305: 78.6189% ( 17) 00:07:48.657 11897.305 - 11947.717: 78.7626% ( 16) 00:07:48.657 11947.717 - 11998.129: 78.9062% ( 16) 00:07:48.657 11998.129 - 12048.542: 79.0589% ( 17) 00:07:48.657 12048.542 - 12098.954: 79.1936% ( 15) 00:07:48.657 12098.954 - 12149.366: 79.3642% ( 19) 00:07:48.657 12149.366 - 12199.778: 79.5079% ( 16) 00:07:48.657 12199.778 - 12250.191: 79.6426% ( 15) 00:07:48.657 12250.191 - 12300.603: 79.7863% ( 16) 00:07:48.657 12300.603 - 12351.015: 79.9479% ( 18) 00:07:48.657 12351.015 - 12401.428: 80.1006% ( 17) 00:07:48.657 12401.428 - 12451.840: 80.1904% ( 10) 00:07:48.657 12451.840 - 12502.252: 80.3161% ( 14) 00:07:48.657 12502.252 - 12552.665: 80.4059% ( 10) 00:07:48.657 12552.665 - 12603.077: 80.5316% ( 14) 00:07:48.657 12603.077 - 12653.489: 80.7292% ( 22) 00:07:48.657 12653.489 - 12703.902: 80.8998% ( 19) 
00:07:48.657 12703.902 - 12754.314: 81.0255% ( 14) 00:07:48.657 12754.314 - 12804.726: 81.2051% ( 20) 00:07:48.657 12804.726 - 12855.138: 81.3578% ( 17) 00:07:48.657 12855.138 - 12905.551: 81.5284% ( 19) 00:07:48.657 12905.551 - 13006.375: 81.8337% ( 34) 00:07:48.657 13006.375 - 13107.200: 82.1480% ( 35) 00:07:48.657 13107.200 - 13208.025: 82.3815% ( 26) 00:07:48.657 13208.025 - 13308.849: 82.6868% ( 34) 00:07:48.657 13308.849 - 13409.674: 83.0999% ( 46) 00:07:48.657 13409.674 - 13510.498: 83.5399% ( 49) 00:07:48.657 13510.498 - 13611.323: 83.9170% ( 42) 00:07:48.657 13611.323 - 13712.148: 84.2313% ( 35) 00:07:48.657 13712.148 - 13812.972: 84.6085% ( 42) 00:07:48.657 13812.972 - 13913.797: 84.9677% ( 40) 00:07:48.657 13913.797 - 14014.622: 85.3179% ( 39) 00:07:48.657 14014.622 - 14115.446: 85.6681% ( 39) 00:07:48.657 14115.446 - 14216.271: 85.9734% ( 34) 00:07:48.657 14216.271 - 14317.095: 86.1710% ( 22) 00:07:48.657 14317.095 - 14417.920: 86.3685% ( 22) 00:07:48.657 14417.920 - 14518.745: 86.6559% ( 32) 00:07:48.657 14518.745 - 14619.569: 86.9971% ( 38) 00:07:48.657 14619.569 - 14720.394: 87.3922% ( 44) 00:07:48.657 14720.394 - 14821.218: 87.7335% ( 38) 00:07:48.657 14821.218 - 14922.043: 88.1017% ( 41) 00:07:48.657 14922.043 - 15022.868: 88.4698% ( 41) 00:07:48.657 15022.868 - 15123.692: 88.7662% ( 33) 00:07:48.657 15123.692 - 15224.517: 89.1074% ( 38) 00:07:48.657 15224.517 - 15325.342: 89.5115% ( 45) 00:07:48.657 15325.342 - 15426.166: 89.9874% ( 53) 00:07:48.657 15426.166 - 15526.991: 90.5172% ( 59) 00:07:48.657 15526.991 - 15627.815: 91.0201% ( 56) 00:07:48.657 15627.815 - 15728.640: 91.4781% ( 51) 00:07:48.657 15728.640 - 15829.465: 91.8463% ( 41) 00:07:48.657 15829.465 - 15930.289: 92.2234% ( 42) 00:07:48.657 15930.289 - 16031.114: 92.5467% ( 36) 00:07:48.657 16031.114 - 16131.938: 92.8879% ( 38) 00:07:48.657 16131.938 - 16232.763: 93.2830% ( 44) 00:07:48.657 16232.763 - 16333.588: 93.6243% ( 38) 00:07:48.657 16333.588 - 16434.412: 93.9835% ( 40) 00:07:48.657 16434.412 - 16535.237: 94.3068% ( 36) 00:07:48.657 16535.237 - 16636.062: 94.6839% ( 42) 00:07:48.657 16636.062 - 16736.886: 94.9982% ( 35) 00:07:48.657 16736.886 - 16837.711: 95.3305% ( 37) 00:07:48.657 16837.711 - 16938.535: 95.5999% ( 30) 00:07:48.657 16938.535 - 17039.360: 95.9501% ( 39) 00:07:48.657 17039.360 - 17140.185: 96.2015% ( 28) 00:07:48.657 17140.185 - 17241.009: 96.3991% ( 22) 00:07:48.657 17241.009 - 17341.834: 96.4799% ( 9) 00:07:48.657 17341.834 - 17442.658: 96.5427% ( 7) 00:07:48.657 17442.658 - 17543.483: 96.5787% ( 4) 00:07:48.657 17543.483 - 17644.308: 96.6146% ( 4) 00:07:48.657 17644.308 - 17745.132: 96.6774% ( 7) 00:07:48.657 17745.132 - 17845.957: 96.7403% ( 7) 00:07:48.657 17845.957 - 17946.782: 96.7942% ( 6) 00:07:48.657 17946.782 - 18047.606: 96.8570% ( 7) 00:07:48.657 18047.606 - 18148.431: 96.9109% ( 6) 00:07:48.657 18148.431 - 18249.255: 96.9738% ( 7) 00:07:48.657 18249.255 - 18350.080: 97.0905% ( 13) 00:07:48.657 18350.080 - 18450.905: 97.1893% ( 11) 00:07:48.657 18450.905 - 18551.729: 97.3689% ( 20) 00:07:48.657 18551.729 - 18652.554: 97.4856% ( 13) 00:07:48.657 18652.554 - 18753.378: 97.5934% ( 12) 00:07:48.657 18753.378 - 18854.203: 97.7101% ( 13) 00:07:48.657 18854.203 - 18955.028: 97.8807% ( 19) 00:07:48.657 18955.028 - 19055.852: 98.0334% ( 17) 00:07:48.657 19055.852 - 19156.677: 98.2040% ( 19) 00:07:48.657 19156.677 - 19257.502: 98.3567% ( 17) 00:07:48.657 19257.502 - 19358.326: 98.4734% ( 13) 00:07:48.657 19358.326 - 19459.151: 98.5722% ( 11) 00:07:48.657 19459.151 - 19559.975: 98.6261% 
( 6) 00:07:48.657 19559.975 - 19660.800: 98.6710% ( 5) 00:07:48.657 19660.800 - 19761.625: 98.7159% ( 5) 00:07:48.657 19761.625 - 19862.449: 98.7698% ( 6) 00:07:48.657 19862.449 - 19963.274: 98.8147% ( 5) 00:07:48.657 19963.274 - 20064.098: 98.8506% ( 4) 00:07:48.657 26416.049 - 26617.698: 98.8865% ( 4) 00:07:48.657 26617.698 - 26819.348: 98.9583% ( 8) 00:07:48.657 26819.348 - 27020.997: 99.0392% ( 9) 00:07:48.657 27020.997 - 27222.646: 99.1200% ( 9) 00:07:48.657 27222.646 - 27424.295: 99.1918% ( 8) 00:07:48.657 27424.295 - 27625.945: 99.2636% ( 8) 00:07:48.657 27625.945 - 27827.594: 99.3445% ( 9) 00:07:48.657 27827.594 - 28029.243: 99.4253% ( 9) 00:07:48.657 31658.929 - 31860.578: 99.4792% ( 6) 00:07:48.657 31860.578 - 32062.228: 99.5510% ( 8) 00:07:48.657 32062.228 - 32263.877: 99.6318% ( 9) 00:07:48.657 32263.877 - 32465.526: 99.7037% ( 8) 00:07:48.657 32465.526 - 32667.175: 99.7755% ( 8) 00:07:48.657 32667.175 - 32868.825: 99.8563% ( 9) 00:07:48.657 32868.825 - 33070.474: 99.9371% ( 9) 00:07:48.657 33070.474 - 33272.123: 100.0000% ( 7) 00:07:48.657 00:07:48.657 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:48.657 ============================================================================== 00:07:48.657 Range in us Cumulative IO count 00:07:48.657 7309.785 - 7360.197: 0.0180% ( 2) 00:07:48.657 7360.197 - 7410.609: 0.0539% ( 4) 00:07:48.657 7410.609 - 7461.022: 0.0808% ( 3) 00:07:48.657 7461.022 - 7511.434: 0.1167% ( 4) 00:07:48.657 7511.434 - 7561.846: 0.1706% ( 6) 00:07:48.657 7561.846 - 7612.258: 0.2245% ( 6) 00:07:48.657 7612.258 - 7662.671: 0.2784% ( 6) 00:07:48.657 7662.671 - 7713.083: 0.3323% ( 6) 00:07:48.657 7713.083 - 7763.495: 0.3861% ( 6) 00:07:48.657 7763.495 - 7813.908: 0.4221% ( 4) 00:07:48.657 7813.908 - 7864.320: 0.4849% ( 7) 00:07:48.657 7864.320 - 7914.732: 0.5388% ( 6) 00:07:48.657 7914.732 - 7965.145: 0.5927% ( 6) 00:07:48.657 7965.145 - 8015.557: 0.6286% ( 4) 00:07:48.657 8015.557 - 8065.969: 0.6915% ( 7) 00:07:48.657 8065.969 - 8116.382: 0.7633% ( 8) 00:07:48.657 8116.382 - 8166.794: 0.8531% ( 10) 00:07:48.657 8166.794 - 8217.206: 0.9698% ( 13) 00:07:48.657 8217.206 - 8267.618: 1.0686% ( 11) 00:07:48.657 8267.618 - 8318.031: 1.1943% ( 14) 00:07:48.657 8318.031 - 8368.443: 1.2931% ( 11) 00:07:48.657 8368.443 - 8418.855: 1.3919% ( 11) 00:07:48.657 8418.855 - 8469.268: 1.4907% ( 11) 00:07:48.657 8469.268 - 8519.680: 1.5715% ( 9) 00:07:48.657 8519.680 - 8570.092: 1.6523% ( 9) 00:07:48.657 8570.092 - 8620.505: 1.7421% ( 10) 00:07:48.657 8620.505 - 8670.917: 1.8409% ( 11) 00:07:48.657 8670.917 - 8721.329: 1.9127% ( 8) 00:07:48.657 8721.329 - 8771.742: 1.9935% ( 9) 00:07:48.657 8771.742 - 8822.154: 2.1372% ( 16) 00:07:48.657 8822.154 - 8872.566: 2.3527% ( 24) 00:07:48.657 8872.566 - 8922.978: 2.5952% ( 27) 00:07:48.657 8922.978 - 8973.391: 2.8556% ( 29) 00:07:48.657 8973.391 - 9023.803: 3.0352% ( 20) 00:07:48.657 9023.803 - 9074.215: 3.2866% ( 28) 00:07:48.657 9074.215 - 9124.628: 3.4932% ( 23) 00:07:48.657 9124.628 - 9175.040: 3.8524% ( 40) 00:07:48.657 9175.040 - 9225.452: 4.1397% ( 32) 00:07:48.657 9225.452 - 9275.865: 4.5169% ( 42) 00:07:48.657 9275.865 - 9326.277: 4.9569% ( 49) 00:07:48.657 9326.277 - 9376.689: 5.3700% ( 46) 00:07:48.658 9376.689 - 9427.102: 6.0075% ( 71) 00:07:48.658 9427.102 - 9477.514: 6.4565% ( 50) 00:07:48.658 9477.514 - 9527.926: 7.2737% ( 91) 00:07:48.658 9527.926 - 9578.338: 8.0550% ( 87) 00:07:48.658 9578.338 - 9628.751: 8.8721% ( 91) 00:07:48.658 9628.751 - 9679.163: 10.1652% ( 144) 00:07:48.658 9679.163 - 9729.575: 
11.4853% ( 147) 00:07:48.658 9729.575 - 9779.988: 12.8951% ( 157) 00:07:48.658 9779.988 - 9830.400: 14.4127% ( 169) 00:07:48.658 9830.400 - 9880.812: 16.0920% ( 187) 00:07:48.658 9880.812 - 9931.225: 18.0945% ( 223) 00:07:48.658 9931.225 - 9981.637: 19.8366% ( 194) 00:07:48.658 9981.637 - 10032.049: 21.8660% ( 226) 00:07:48.658 10032.049 - 10082.462: 24.0392% ( 242) 00:07:48.658 10082.462 - 10132.874: 26.4188% ( 265) 00:07:48.658 10132.874 - 10183.286: 28.8075% ( 266) 00:07:48.658 10183.286 - 10233.698: 31.5374% ( 304) 00:07:48.658 10233.698 - 10284.111: 34.1685% ( 293) 00:07:48.658 10284.111 - 10334.523: 37.0510% ( 321) 00:07:48.658 10334.523 - 10384.935: 39.8617% ( 313) 00:07:48.658 10384.935 - 10435.348: 42.5647% ( 301) 00:07:48.658 10435.348 - 10485.760: 45.5729% ( 335) 00:07:48.658 10485.760 - 10536.172: 48.4195% ( 317) 00:07:48.658 10536.172 - 10586.585: 51.0596% ( 294) 00:07:48.658 10586.585 - 10636.997: 53.6997% ( 294) 00:07:48.658 10636.997 - 10687.409: 56.1153% ( 269) 00:07:48.658 10687.409 - 10737.822: 58.4950% ( 265) 00:07:48.658 10737.822 - 10788.234: 60.8028% ( 257) 00:07:48.658 10788.234 - 10838.646: 62.7963% ( 222) 00:07:48.658 10838.646 - 10889.058: 64.6731% ( 209) 00:07:48.658 10889.058 - 10939.471: 66.2536% ( 176) 00:07:48.658 10939.471 - 10989.883: 68.0675% ( 202) 00:07:48.658 10989.883 - 11040.295: 69.5492% ( 165) 00:07:48.658 11040.295 - 11090.708: 70.7974% ( 139) 00:07:48.658 11090.708 - 11141.120: 72.0905% ( 144) 00:07:48.658 11141.120 - 11191.532: 73.2399% ( 128) 00:07:48.658 11191.532 - 11241.945: 74.1559% ( 102) 00:07:48.658 11241.945 - 11292.357: 74.9731% ( 91) 00:07:48.658 11292.357 - 11342.769: 75.6825% ( 79) 00:07:48.658 11342.769 - 11393.182: 76.2213% ( 60) 00:07:48.658 11393.182 - 11443.594: 76.6433% ( 47) 00:07:48.658 11443.594 - 11494.006: 76.9666% ( 36) 00:07:48.658 11494.006 - 11544.418: 77.2450% ( 31) 00:07:48.658 11544.418 - 11594.831: 77.5233% ( 31) 00:07:48.658 11594.831 - 11645.243: 77.7748% ( 28) 00:07:48.658 11645.243 - 11695.655: 77.9095% ( 15) 00:07:48.658 11695.655 - 11746.068: 78.1070% ( 22) 00:07:48.658 11746.068 - 11796.480: 78.4393% ( 37) 00:07:48.658 11796.480 - 11846.892: 78.6458% ( 23) 00:07:48.658 11846.892 - 11897.305: 78.7895% ( 16) 00:07:48.658 11897.305 - 11947.717: 78.8973% ( 12) 00:07:48.658 11947.717 - 11998.129: 79.1128% ( 24) 00:07:48.658 11998.129 - 12048.542: 79.2744% ( 18) 00:07:48.658 12048.542 - 12098.954: 79.4361% ( 18) 00:07:48.658 12098.954 - 12149.366: 79.5708% ( 15) 00:07:48.658 12149.366 - 12199.778: 79.7055% ( 15) 00:07:48.658 12199.778 - 12250.191: 79.7953% ( 10) 00:07:48.658 12250.191 - 12300.603: 79.8851% ( 10) 00:07:48.658 12300.603 - 12351.015: 80.0198% ( 15) 00:07:48.658 12351.015 - 12401.428: 80.1814% ( 18) 00:07:48.658 12401.428 - 12451.840: 80.2981% ( 13) 00:07:48.658 12451.840 - 12502.252: 80.4059% ( 12) 00:07:48.658 12502.252 - 12552.665: 80.5945% ( 21) 00:07:48.658 12552.665 - 12603.077: 80.7471% ( 17) 00:07:48.658 12603.077 - 12653.489: 80.9267% ( 20) 00:07:48.658 12653.489 - 12703.902: 81.0345% ( 12) 00:07:48.658 12703.902 - 12754.314: 81.1871% ( 17) 00:07:48.658 12754.314 - 12804.726: 81.3667% ( 20) 00:07:48.658 12804.726 - 12855.138: 81.4476% ( 9) 00:07:48.658 12855.138 - 12905.551: 81.5823% ( 15) 00:07:48.658 12905.551 - 13006.375: 81.7888% ( 23) 00:07:48.658 13006.375 - 13107.200: 82.0941% ( 34) 00:07:48.658 13107.200 - 13208.025: 82.3725% ( 31) 00:07:48.658 13208.025 - 13308.849: 82.6419% ( 30) 00:07:48.658 13308.849 - 13409.674: 82.9382% ( 33) 00:07:48.658 13409.674 - 13510.498: 83.2166% ( 31) 
00:07:48.658 13510.498 - 13611.323: 83.5578% ( 38) 00:07:48.658 13611.323 - 13712.148: 83.7913% ( 26) 00:07:48.658 13712.148 - 13812.972: 84.0517% ( 29) 00:07:48.658 13812.972 - 13913.797: 84.3481% ( 33) 00:07:48.658 13913.797 - 14014.622: 84.6893% ( 38) 00:07:48.658 14014.622 - 14115.446: 85.0216% ( 37) 00:07:48.658 14115.446 - 14216.271: 85.4077% ( 43) 00:07:48.658 14216.271 - 14317.095: 85.8118% ( 45) 00:07:48.658 14317.095 - 14417.920: 86.1440% ( 37) 00:07:48.658 14417.920 - 14518.745: 86.5032% ( 40) 00:07:48.658 14518.745 - 14619.569: 86.7726% ( 30) 00:07:48.658 14619.569 - 14720.394: 87.1139% ( 38) 00:07:48.658 14720.394 - 14821.218: 87.5629% ( 50) 00:07:48.658 14821.218 - 14922.043: 87.8772% ( 35) 00:07:48.658 14922.043 - 15022.868: 88.2812% ( 45) 00:07:48.658 15022.868 - 15123.692: 88.6584% ( 42) 00:07:48.658 15123.692 - 15224.517: 89.2690% ( 68) 00:07:48.658 15224.517 - 15325.342: 89.7001% ( 48) 00:07:48.658 15325.342 - 15426.166: 90.0952% ( 44) 00:07:48.658 15426.166 - 15526.991: 90.4723% ( 42) 00:07:48.658 15526.991 - 15627.815: 91.0381% ( 63) 00:07:48.658 15627.815 - 15728.640: 91.5050% ( 52) 00:07:48.658 15728.640 - 15829.465: 91.9361% ( 48) 00:07:48.658 15829.465 - 15930.289: 92.4210% ( 54) 00:07:48.658 15930.289 - 16031.114: 92.7981% ( 42) 00:07:48.658 16031.114 - 16131.938: 93.3459% ( 61) 00:07:48.658 16131.938 - 16232.763: 93.7141% ( 41) 00:07:48.658 16232.763 - 16333.588: 94.1182% ( 45) 00:07:48.658 16333.588 - 16434.412: 94.4504% ( 37) 00:07:48.658 16434.412 - 16535.237: 94.8545% ( 45) 00:07:48.658 16535.237 - 16636.062: 95.2496% ( 44) 00:07:48.658 16636.062 - 16736.886: 95.6178% ( 41) 00:07:48.658 16736.886 - 16837.711: 95.8962% ( 31) 00:07:48.658 16837.711 - 16938.535: 96.1566% ( 29) 00:07:48.658 16938.535 - 17039.360: 96.4350% ( 31) 00:07:48.658 17039.360 - 17140.185: 96.6415% ( 23) 00:07:48.658 17140.185 - 17241.009: 96.8481% ( 23) 00:07:48.658 17241.009 - 17341.834: 96.9468% ( 11) 00:07:48.658 17341.834 - 17442.658: 96.9738% ( 3) 00:07:48.658 17442.658 - 17543.483: 96.9828% ( 1) 00:07:48.658 17543.483 - 17644.308: 97.0097% ( 3) 00:07:48.658 17644.308 - 17745.132: 97.0546% ( 5) 00:07:48.658 17845.957 - 17946.782: 97.0995% ( 5) 00:07:48.658 17946.782 - 18047.606: 97.1264% ( 3) 00:07:48.658 18450.905 - 18551.729: 97.1624% ( 4) 00:07:48.658 18551.729 - 18652.554: 97.3240% ( 18) 00:07:48.658 18652.554 - 18753.378: 97.4048% ( 9) 00:07:48.658 18753.378 - 18854.203: 97.4587% ( 6) 00:07:48.658 18854.203 - 18955.028: 97.5575% ( 11) 00:07:48.658 18955.028 - 19055.852: 97.6562% ( 11) 00:07:48.658 19055.852 - 19156.677: 97.7640% ( 12) 00:07:48.658 19156.677 - 19257.502: 97.8897% ( 14) 00:07:48.658 19257.502 - 19358.326: 98.0424% ( 17) 00:07:48.658 19358.326 - 19459.151: 98.1591% ( 13) 00:07:48.658 19459.151 - 19559.975: 98.2759% ( 13) 00:07:48.658 19559.975 - 19660.800: 98.4465% ( 19) 00:07:48.658 19660.800 - 19761.625: 98.5183% ( 8) 00:07:48.658 19761.625 - 19862.449: 98.6081% ( 10) 00:07:48.658 19862.449 - 19963.274: 98.6710% ( 7) 00:07:48.658 19963.274 - 20064.098: 98.7069% ( 4) 00:07:48.658 20064.098 - 20164.923: 98.7518% ( 5) 00:07:48.658 20164.923 - 20265.748: 98.7877% ( 4) 00:07:48.658 20265.748 - 20366.572: 98.8506% ( 7) 00:07:48.658 24601.206 - 24702.031: 98.8685% ( 2) 00:07:48.658 24702.031 - 24802.855: 98.9045% ( 4) 00:07:48.658 24802.855 - 24903.680: 98.9404% ( 4) 00:07:48.658 24903.680 - 25004.505: 98.9763% ( 4) 00:07:48.658 25004.505 - 25105.329: 99.0032% ( 3) 00:07:48.658 25105.329 - 25206.154: 99.0302% ( 3) 00:07:48.658 25206.154 - 25306.978: 99.0661% ( 4) 
00:07:48.658 25306.978 - 25407.803: 99.0930% ( 3) 00:07:48.658 25407.803 - 25508.628: 99.1290% ( 4) 00:07:48.658 25508.628 - 25609.452: 99.1828% ( 6) 00:07:48.658 25609.452 - 25710.277: 99.2008% ( 2) 00:07:48.658 25710.277 - 25811.102: 99.2367% ( 4) 00:07:48.658 25811.102 - 26012.751: 99.3085% ( 8) 00:07:48.658 26012.751 - 26214.400: 99.3714% ( 7) 00:07:48.658 26214.400 - 26416.049: 99.4253% ( 6) 00:07:48.658 30045.735 - 30247.385: 99.4522% ( 3) 00:07:48.658 30247.385 - 30449.034: 99.5061% ( 6) 00:07:48.658 30449.034 - 30650.683: 99.5779% ( 8) 00:07:48.658 30650.683 - 30852.332: 99.6408% ( 7) 00:07:48.658 30852.332 - 31053.982: 99.7126% ( 8) 00:07:48.658 31053.982 - 31255.631: 99.7845% ( 8) 00:07:48.658 31255.631 - 31457.280: 99.8563% ( 8) 00:07:48.658 31457.280 - 31658.929: 99.9192% ( 7) 00:07:48.658 31658.929 - 31860.578: 99.9910% ( 8) 00:07:48.658 31860.578 - 32062.228: 100.0000% ( 1) 00:07:48.658 00:07:48.658 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:48.658 ============================================================================== 00:07:48.658 Range in us Cumulative IO count 00:07:48.658 7461.022 - 7511.434: 0.0090% ( 1) 00:07:48.658 7511.434 - 7561.846: 0.0718% ( 7) 00:07:48.658 7561.846 - 7612.258: 0.1078% ( 4) 00:07:48.658 7612.258 - 7662.671: 0.1257% ( 2) 00:07:48.658 7662.671 - 7713.083: 0.1437% ( 2) 00:07:48.658 7713.083 - 7763.495: 0.1796% ( 4) 00:07:48.658 7763.495 - 7813.908: 0.2245% ( 5) 00:07:48.658 7813.908 - 7864.320: 0.2784% ( 6) 00:07:48.659 7864.320 - 7914.732: 0.4041% ( 14) 00:07:48.659 7914.732 - 7965.145: 0.5388% ( 15) 00:07:48.659 7965.145 - 8015.557: 0.6286% ( 10) 00:07:48.659 8015.557 - 8065.969: 0.7992% ( 19) 00:07:48.659 8065.969 - 8116.382: 0.9070% ( 12) 00:07:48.659 8116.382 - 8166.794: 1.0596% ( 17) 00:07:48.659 8166.794 - 8217.206: 1.2033% ( 16) 00:07:48.659 8217.206 - 8267.618: 1.3200% ( 13) 00:07:48.659 8267.618 - 8318.031: 1.4368% ( 13) 00:07:48.659 8318.031 - 8368.443: 1.5356% ( 11) 00:07:48.659 8368.443 - 8418.855: 1.6343% ( 11) 00:07:48.659 8418.855 - 8469.268: 1.7870% ( 17) 00:07:48.659 8469.268 - 8519.680: 1.9127% ( 14) 00:07:48.659 8519.680 - 8570.092: 2.0295% ( 13) 00:07:48.659 8570.092 - 8620.505: 2.1552% ( 14) 00:07:48.659 8620.505 - 8670.917: 2.2629% ( 12) 00:07:48.659 8670.917 - 8721.329: 2.3976% ( 15) 00:07:48.659 8721.329 - 8771.742: 2.4695% ( 8) 00:07:48.659 8771.742 - 8822.154: 2.5593% ( 10) 00:07:48.659 8822.154 - 8872.566: 2.6940% ( 15) 00:07:48.659 8872.566 - 8922.978: 2.8376% ( 16) 00:07:48.659 8922.978 - 8973.391: 2.9544% ( 13) 00:07:48.659 8973.391 - 9023.803: 3.1160% ( 18) 00:07:48.659 9023.803 - 9074.215: 3.3315% ( 24) 00:07:48.659 9074.215 - 9124.628: 3.5381% ( 23) 00:07:48.659 9124.628 - 9175.040: 3.8165% ( 31) 00:07:48.659 9175.040 - 9225.452: 4.1397% ( 36) 00:07:48.659 9225.452 - 9275.865: 4.3732% ( 26) 00:07:48.659 9275.865 - 9326.277: 4.6067% ( 26) 00:07:48.659 9326.277 - 9376.689: 4.9389% ( 37) 00:07:48.659 9376.689 - 9427.102: 5.4059% ( 52) 00:07:48.659 9427.102 - 9477.514: 5.8818% ( 53) 00:07:48.659 9477.514 - 9527.926: 6.5014% ( 69) 00:07:48.659 9527.926 - 9578.338: 7.3096% ( 90) 00:07:48.659 9578.338 - 9628.751: 8.2256% ( 102) 00:07:48.659 9628.751 - 9679.163: 9.1595% ( 104) 00:07:48.659 9679.163 - 9729.575: 10.1203% ( 107) 00:07:48.659 9729.575 - 9779.988: 11.4583% ( 149) 00:07:48.659 9779.988 - 9830.400: 12.9041% ( 161) 00:07:48.659 9830.400 - 9880.812: 14.6642% ( 196) 00:07:48.659 9880.812 - 9931.225: 16.5499% ( 210) 00:07:48.659 9931.225 - 9981.637: 18.5345% ( 221) 00:07:48.659 9981.637 - 
10032.049: 20.7705% ( 249) 00:07:48.659 10032.049 - 10082.462: 23.1771% ( 268) 00:07:48.659 10082.462 - 10132.874: 25.7812% ( 290) 00:07:48.659 10132.874 - 10183.286: 28.3405% ( 285) 00:07:48.659 10183.286 - 10233.698: 30.9357% ( 289) 00:07:48.659 10233.698 - 10284.111: 33.8093% ( 320) 00:07:48.659 10284.111 - 10334.523: 36.6918% ( 321) 00:07:48.659 10334.523 - 10384.935: 39.8078% ( 347) 00:07:48.659 10384.935 - 10435.348: 42.6724% ( 319) 00:07:48.659 10435.348 - 10485.760: 45.4741% ( 312) 00:07:48.659 10485.760 - 10536.172: 48.2848% ( 313) 00:07:48.659 10536.172 - 10586.585: 51.1674% ( 321) 00:07:48.659 10586.585 - 10636.997: 54.0050% ( 316) 00:07:48.659 10636.997 - 10687.409: 56.9055% ( 323) 00:07:48.659 10687.409 - 10737.822: 59.5815% ( 298) 00:07:48.659 10737.822 - 10788.234: 62.0510% ( 275) 00:07:48.659 10788.234 - 10838.646: 64.1523% ( 234) 00:07:48.659 10838.646 - 10889.058: 66.0201% ( 208) 00:07:48.659 10889.058 - 10939.471: 67.6275% ( 179) 00:07:48.659 10939.471 - 10989.883: 69.1900% ( 174) 00:07:48.659 10989.883 - 11040.295: 70.6537% ( 163) 00:07:48.659 11040.295 - 11090.708: 71.8211% ( 130) 00:07:48.659 11090.708 - 11141.120: 72.8358% ( 113) 00:07:48.659 11141.120 - 11191.532: 73.7338% ( 100) 00:07:48.659 11191.532 - 11241.945: 74.5330% ( 89) 00:07:48.659 11241.945 - 11292.357: 75.2425% ( 79) 00:07:48.659 11292.357 - 11342.769: 75.8980% ( 73) 00:07:48.659 11342.769 - 11393.182: 76.4458% ( 61) 00:07:48.659 11393.182 - 11443.594: 76.8499% ( 45) 00:07:48.659 11443.594 - 11494.006: 77.1642% ( 35) 00:07:48.659 11494.006 - 11544.418: 77.3976% ( 26) 00:07:48.659 11544.418 - 11594.831: 77.5593% ( 18) 00:07:48.659 11594.831 - 11645.243: 77.6940% ( 15) 00:07:48.659 11645.243 - 11695.655: 77.9095% ( 24) 00:07:48.659 11695.655 - 11746.068: 78.0532% ( 16) 00:07:48.659 11746.068 - 11796.480: 78.1519% ( 11) 00:07:48.659 11796.480 - 11846.892: 78.2328% ( 9) 00:07:48.659 11846.892 - 11897.305: 78.3136% ( 9) 00:07:48.659 11897.305 - 11947.717: 78.4034% ( 10) 00:07:48.659 11947.717 - 11998.129: 78.4932% ( 10) 00:07:48.659 11998.129 - 12048.542: 78.6099% ( 13) 00:07:48.659 12048.542 - 12098.954: 78.7626% ( 17) 00:07:48.659 12098.954 - 12149.366: 78.8973% ( 15) 00:07:48.659 12149.366 - 12199.778: 79.0769% ( 20) 00:07:48.659 12199.778 - 12250.191: 79.2565% ( 20) 00:07:48.659 12250.191 - 12300.603: 79.4181% ( 18) 00:07:48.659 12300.603 - 12351.015: 79.5887% ( 19) 00:07:48.659 12351.015 - 12401.428: 79.7773% ( 21) 00:07:48.659 12401.428 - 12451.840: 79.9569% ( 20) 00:07:48.659 12451.840 - 12502.252: 80.1096% ( 17) 00:07:48.659 12502.252 - 12552.665: 80.2622% ( 17) 00:07:48.659 12552.665 - 12603.077: 80.4328% ( 19) 00:07:48.659 12603.077 - 12653.489: 80.6034% ( 19) 00:07:48.659 12653.489 - 12703.902: 80.7651% ( 18) 00:07:48.659 12703.902 - 12754.314: 80.8998% ( 15) 00:07:48.659 12754.314 - 12804.726: 81.0255% ( 14) 00:07:48.659 12804.726 - 12855.138: 81.1333% ( 12) 00:07:48.659 12855.138 - 12905.551: 81.2590% ( 14) 00:07:48.659 12905.551 - 13006.375: 81.5643% ( 34) 00:07:48.659 13006.375 - 13107.200: 81.8786% ( 35) 00:07:48.659 13107.200 - 13208.025: 82.1839% ( 34) 00:07:48.659 13208.025 - 13308.849: 82.5251% ( 38) 00:07:48.659 13308.849 - 13409.674: 82.7856% ( 29) 00:07:48.659 13409.674 - 13510.498: 83.0550% ( 30) 00:07:48.659 13510.498 - 13611.323: 83.3154% ( 29) 00:07:48.659 13611.323 - 13712.148: 83.6476% ( 37) 00:07:48.659 13712.148 - 13812.972: 83.9978% ( 39) 00:07:48.659 13812.972 - 13913.797: 84.3660% ( 41) 00:07:48.659 13913.797 - 14014.622: 84.7522% ( 43) 00:07:48.659 14014.622 - 14115.446: 
85.1473% ( 44) 00:07:48.659 14115.446 - 14216.271: 85.4526% ( 34) 00:07:48.659 14216.271 - 14317.095: 85.7130% ( 29) 00:07:48.659 14317.095 - 14417.920: 86.0183% ( 34) 00:07:48.659 14417.920 - 14518.745: 86.3775% ( 40) 00:07:48.659 14518.745 - 14619.569: 86.6828% ( 34) 00:07:48.659 14619.569 - 14720.394: 87.0330% ( 39) 00:07:48.659 14720.394 - 14821.218: 87.4192% ( 43) 00:07:48.659 14821.218 - 14922.043: 87.7874% ( 41) 00:07:48.659 14922.043 - 15022.868: 88.1286% ( 38) 00:07:48.659 15022.868 - 15123.692: 88.4608% ( 37) 00:07:48.659 15123.692 - 15224.517: 88.8649% ( 45) 00:07:48.659 15224.517 - 15325.342: 89.3319% ( 52) 00:07:48.659 15325.342 - 15426.166: 89.8168% ( 54) 00:07:48.659 15426.166 - 15526.991: 90.3107% ( 55) 00:07:48.659 15526.991 - 15627.815: 90.7866% ( 53) 00:07:48.659 15627.815 - 15728.640: 91.2716% ( 54) 00:07:48.659 15728.640 - 15829.465: 91.7924% ( 58) 00:07:48.659 15829.465 - 15930.289: 92.2504% ( 51) 00:07:48.659 15930.289 - 16031.114: 92.7353% ( 54) 00:07:48.659 16031.114 - 16131.938: 93.2112% ( 53) 00:07:48.659 16131.938 - 16232.763: 93.6782% ( 52) 00:07:48.659 16232.763 - 16333.588: 94.1541% ( 53) 00:07:48.659 16333.588 - 16434.412: 94.6210% ( 52) 00:07:48.659 16434.412 - 16535.237: 94.9533% ( 37) 00:07:48.659 16535.237 - 16636.062: 95.3125% ( 40) 00:07:48.659 16636.062 - 16736.886: 95.6627% ( 39) 00:07:48.659 16736.886 - 16837.711: 96.0129% ( 39) 00:07:48.659 16837.711 - 16938.535: 96.2733% ( 29) 00:07:48.659 16938.535 - 17039.360: 96.5068% ( 26) 00:07:48.659 17039.360 - 17140.185: 96.7313% ( 25) 00:07:48.659 17140.185 - 17241.009: 96.8840% ( 17) 00:07:48.659 17241.009 - 17341.834: 96.9917% ( 12) 00:07:48.659 17341.834 - 17442.658: 97.0546% ( 7) 00:07:48.659 17442.658 - 17543.483: 97.1085% ( 6) 00:07:48.659 17543.483 - 17644.308: 97.1264% ( 2) 00:07:48.659 17946.782 - 18047.606: 97.1444% ( 2) 00:07:48.659 18047.606 - 18148.431: 97.1983% ( 6) 00:07:48.659 18148.431 - 18249.255: 97.2522% ( 6) 00:07:48.659 18249.255 - 18350.080: 97.2971% ( 5) 00:07:48.659 18350.080 - 18450.905: 97.3420% ( 5) 00:07:48.659 18450.905 - 18551.729: 97.4318% ( 10) 00:07:48.660 18551.729 - 18652.554: 97.5395% ( 12) 00:07:48.660 18652.554 - 18753.378: 97.6383% ( 11) 00:07:48.660 18753.378 - 18854.203: 97.7460% ( 12) 00:07:48.660 18854.203 - 18955.028: 97.8538% ( 12) 00:07:48.660 18955.028 - 19055.852: 97.9705% ( 13) 00:07:48.660 19055.852 - 19156.677: 98.1322% ( 18) 00:07:48.660 19156.677 - 19257.502: 98.2489% ( 13) 00:07:48.660 19257.502 - 19358.326: 98.3567% ( 12) 00:07:48.660 19358.326 - 19459.151: 98.4555% ( 11) 00:07:48.660 19459.151 - 19559.975: 98.5453% ( 10) 00:07:48.660 19559.975 - 19660.800: 98.5991% ( 6) 00:07:48.660 19660.800 - 19761.625: 98.6620% ( 7) 00:07:48.660 19761.625 - 19862.449: 98.7159% ( 6) 00:07:48.660 19862.449 - 19963.274: 98.7698% ( 6) 00:07:48.660 19963.274 - 20064.098: 98.8236% ( 6) 00:07:48.660 20064.098 - 20164.923: 98.8506% ( 3) 00:07:48.660 22887.188 - 22988.012: 98.8685% ( 2) 00:07:48.660 22988.012 - 23088.837: 98.9045% ( 4) 00:07:48.660 23088.837 - 23189.662: 98.9314% ( 3) 00:07:48.660 23189.662 - 23290.486: 98.9763% ( 5) 00:07:48.660 23290.486 - 23391.311: 99.0032% ( 3) 00:07:48.660 23391.311 - 23492.135: 99.0481% ( 5) 00:07:48.660 23492.135 - 23592.960: 99.0841% ( 4) 00:07:48.660 23592.960 - 23693.785: 99.1200% ( 4) 00:07:48.660 23693.785 - 23794.609: 99.1559% ( 4) 00:07:48.660 23794.609 - 23895.434: 99.2008% ( 5) 00:07:48.660 23895.434 - 23996.258: 99.2367% ( 4) 00:07:48.660 23996.258 - 24097.083: 99.2726% ( 4) 00:07:48.660 24097.083 - 24197.908: 99.3085% ( 
4) 00:07:48.660 24197.908 - 24298.732: 99.3534% ( 5) 00:07:48.660 24298.732 - 24399.557: 99.3894% ( 4) 00:07:48.660 24399.557 - 24500.382: 99.4253% ( 4) 00:07:48.660 28432.542 - 28634.191: 99.4971% ( 8) 00:07:48.660 28634.191 - 28835.840: 99.5690% ( 8) 00:07:48.660 28835.840 - 29037.489: 99.6498% ( 9) 00:07:48.660 29037.489 - 29239.138: 99.7216% ( 8) 00:07:48.660 29239.138 - 29440.788: 99.8024% ( 9) 00:07:48.660 29440.788 - 29642.437: 99.8743% ( 8) 00:07:48.660 29642.437 - 29844.086: 99.9461% ( 8) 00:07:48.660 29844.086 - 30045.735: 100.0000% ( 6) 00:07:48.660 00:07:48.660 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:48.660 ============================================================================== 00:07:48.660 Range in us Cumulative IO count 00:07:48.660 6805.662 - 6856.074: 0.0180% ( 2) 00:07:48.660 6856.074 - 6906.486: 0.0449% ( 3) 00:07:48.660 6906.486 - 6956.898: 0.0988% ( 6) 00:07:48.660 6956.898 - 7007.311: 0.1257% ( 3) 00:07:48.660 7007.311 - 7057.723: 0.1527% ( 3) 00:07:48.660 7057.723 - 7108.135: 0.1886% ( 4) 00:07:48.660 7108.135 - 7158.548: 0.2335% ( 5) 00:07:48.660 7158.548 - 7208.960: 0.2604% ( 3) 00:07:48.660 7208.960 - 7259.372: 0.2874% ( 3) 00:07:48.660 7259.372 - 7309.785: 0.3233% ( 4) 00:07:48.660 7309.785 - 7360.197: 0.3502% ( 3) 00:07:48.660 7360.197 - 7410.609: 0.3772% ( 3) 00:07:48.660 7410.609 - 7461.022: 0.4041% ( 3) 00:07:48.660 7461.022 - 7511.434: 0.4400% ( 4) 00:07:48.660 7511.434 - 7561.846: 0.4759% ( 4) 00:07:48.660 7561.846 - 7612.258: 0.5029% ( 3) 00:07:48.660 7612.258 - 7662.671: 0.5388% ( 4) 00:07:48.660 7662.671 - 7713.083: 0.5657% ( 3) 00:07:48.660 7713.083 - 7763.495: 0.5747% ( 1) 00:07:48.660 7864.320 - 7914.732: 0.5927% ( 2) 00:07:48.660 7914.732 - 7965.145: 0.6376% ( 5) 00:07:48.660 7965.145 - 8015.557: 0.6735% ( 4) 00:07:48.660 8015.557 - 8065.969: 0.7094% ( 4) 00:07:48.660 8065.969 - 8116.382: 0.9070% ( 22) 00:07:48.660 8116.382 - 8166.794: 1.0057% ( 11) 00:07:48.660 8166.794 - 8217.206: 1.0866% ( 9) 00:07:48.660 8217.206 - 8267.618: 1.1943% ( 12) 00:07:48.660 8267.618 - 8318.031: 1.3380% ( 16) 00:07:48.660 8318.031 - 8368.443: 1.4547% ( 13) 00:07:48.660 8368.443 - 8418.855: 1.5984% ( 16) 00:07:48.660 8418.855 - 8469.268: 1.7331% ( 15) 00:07:48.660 8469.268 - 8519.680: 1.8768% ( 16) 00:07:48.660 8519.680 - 8570.092: 2.0115% ( 15) 00:07:48.660 8570.092 - 8620.505: 2.1462% ( 15) 00:07:48.660 8620.505 - 8670.917: 2.2989% ( 17) 00:07:48.660 8670.917 - 8721.329: 2.4695% ( 19) 00:07:48.660 8721.329 - 8771.742: 2.6580% ( 21) 00:07:48.660 8771.742 - 8822.154: 2.7838% ( 14) 00:07:48.660 8822.154 - 8872.566: 2.9454% ( 18) 00:07:48.660 8872.566 - 8922.978: 3.0981% ( 17) 00:07:48.660 8922.978 - 8973.391: 3.2148% ( 13) 00:07:48.660 8973.391 - 9023.803: 3.3585% ( 16) 00:07:48.660 9023.803 - 9074.215: 3.5471% ( 21) 00:07:48.660 9074.215 - 9124.628: 3.7716% ( 25) 00:07:48.660 9124.628 - 9175.040: 3.9781% ( 23) 00:07:48.660 9175.040 - 9225.452: 4.3193% ( 38) 00:07:48.660 9225.452 - 9275.865: 4.7683% ( 50) 00:07:48.660 9275.865 - 9326.277: 5.2353% ( 52) 00:07:48.660 9326.277 - 9376.689: 5.7292% ( 55) 00:07:48.660 9376.689 - 9427.102: 6.3039% ( 64) 00:07:48.660 9427.102 - 9477.514: 7.0312% ( 81) 00:07:48.660 9477.514 - 9527.926: 7.7317% ( 78) 00:07:48.660 9527.926 - 9578.338: 8.4860% ( 84) 00:07:48.660 9578.338 - 9628.751: 9.4917% ( 112) 00:07:48.660 9628.751 - 9679.163: 10.4975% ( 112) 00:07:48.660 9679.163 - 9729.575: 11.7098% ( 135) 00:07:48.660 9729.575 - 9779.988: 12.9939% ( 143) 00:07:48.660 9779.988 - 9830.400: 14.4756% ( 165) 
00:07:48.660 9830.400 - 9880.812: 16.2087% ( 193) 00:07:48.660 9880.812 - 9931.225: 18.1573% ( 217) 00:07:48.660 9931.225 - 9981.637: 20.3305% ( 242) 00:07:48.660 9981.637 - 10032.049: 22.5934% ( 252) 00:07:48.660 10032.049 - 10082.462: 24.8743% ( 254) 00:07:48.660 10082.462 - 10132.874: 27.2091% ( 260) 00:07:48.660 10132.874 - 10183.286: 29.7144% ( 279) 00:07:48.660 10183.286 - 10233.698: 32.3904% ( 298) 00:07:48.660 10233.698 - 10284.111: 35.0754% ( 299) 00:07:48.660 10284.111 - 10334.523: 37.7874% ( 302) 00:07:48.660 10334.523 - 10384.935: 40.6430% ( 318) 00:07:48.660 10384.935 - 10435.348: 43.6602% ( 336) 00:07:48.660 10435.348 - 10485.760: 46.3721% ( 302) 00:07:48.660 10485.760 - 10536.172: 49.2277% ( 318) 00:07:48.660 10536.172 - 10586.585: 51.9397% ( 302) 00:07:48.660 10586.585 - 10636.997: 54.7144% ( 309) 00:07:48.660 10636.997 - 10687.409: 57.1121% ( 267) 00:07:48.660 10687.409 - 10737.822: 59.3391% ( 248) 00:07:48.660 10737.822 - 10788.234: 61.5751% ( 249) 00:07:48.660 10788.234 - 10838.646: 63.6404% ( 230) 00:07:48.660 10838.646 - 10889.058: 65.4813% ( 205) 00:07:48.660 10889.058 - 10939.471: 67.2593% ( 198) 00:07:48.660 10939.471 - 10989.883: 68.8039% ( 172) 00:07:48.660 10989.883 - 11040.295: 70.2317% ( 159) 00:07:48.660 11040.295 - 11090.708: 71.4978% ( 141) 00:07:48.660 11090.708 - 11141.120: 72.5036% ( 112) 00:07:48.660 11141.120 - 11191.532: 73.4734% ( 108) 00:07:48.660 11191.532 - 11241.945: 74.2726% ( 89) 00:07:48.660 11241.945 - 11292.357: 74.8473% ( 64) 00:07:48.660 11292.357 - 11342.769: 75.3951% ( 61) 00:07:48.660 11342.769 - 11393.182: 75.8441% ( 50) 00:07:48.660 11393.182 - 11443.594: 76.2572% ( 46) 00:07:48.660 11443.594 - 11494.006: 76.6074% ( 39) 00:07:48.660 11494.006 - 11544.418: 76.8678% ( 29) 00:07:48.660 11544.418 - 11594.831: 77.1103% ( 27) 00:07:48.660 11594.831 - 11645.243: 77.3168% ( 23) 00:07:48.660 11645.243 - 11695.655: 77.4874% ( 19) 00:07:48.660 11695.655 - 11746.068: 77.6401% ( 17) 00:07:48.660 11746.068 - 11796.480: 77.7029% ( 7) 00:07:48.660 11796.480 - 11846.892: 77.7299% ( 3) 00:07:48.660 11846.892 - 11897.305: 77.7748% ( 5) 00:07:48.660 11897.305 - 11947.717: 77.8197% ( 5) 00:07:48.660 11947.717 - 11998.129: 77.8556% ( 4) 00:07:48.660 11998.129 - 12048.542: 77.9095% ( 6) 00:07:48.660 12048.542 - 12098.954: 77.9903% ( 9) 00:07:48.660 12098.954 - 12149.366: 78.0801% ( 10) 00:07:48.660 12149.366 - 12199.778: 78.1968% ( 13) 00:07:48.660 12199.778 - 12250.191: 78.3226% ( 14) 00:07:48.660 12250.191 - 12300.603: 78.4573% ( 15) 00:07:48.660 12300.603 - 12351.015: 78.5650% ( 12) 00:07:48.660 12351.015 - 12401.428: 78.7177% ( 17) 00:07:48.660 12401.428 - 12451.840: 78.8883% ( 19) 00:07:48.660 12451.840 - 12502.252: 79.0499% ( 18) 00:07:48.660 12502.252 - 12552.665: 79.2295% ( 20) 00:07:48.660 12552.665 - 12603.077: 79.3822% ( 17) 00:07:48.660 12603.077 - 12653.489: 79.4989% ( 13) 00:07:48.660 12653.489 - 12703.902: 79.6606% ( 18) 00:07:48.660 12703.902 - 12754.314: 79.8132% ( 17) 00:07:48.660 12754.314 - 12804.726: 79.9749% ( 18) 00:07:48.660 12804.726 - 12855.138: 80.1096% ( 15) 00:07:48.660 12855.138 - 12905.551: 80.2981% ( 21) 00:07:48.660 12905.551 - 13006.375: 80.7292% ( 48) 00:07:48.660 13006.375 - 13107.200: 81.2320% ( 56) 00:07:48.660 13107.200 - 13208.025: 81.7170% ( 54) 00:07:48.660 13208.025 - 13308.849: 82.2827% ( 63) 00:07:48.660 13308.849 - 13409.674: 82.8305% ( 61) 00:07:48.660 13409.674 - 13510.498: 83.4411% ( 68) 00:07:48.660 13510.498 - 13611.323: 83.9529% ( 57) 00:07:48.660 13611.323 - 13712.148: 84.4558% ( 56) 00:07:48.660 13712.148 
- 13812.972: 84.8330% ( 42) 00:07:48.660 13812.972 - 13913.797: 85.2011% ( 41) 00:07:48.660 13913.797 - 14014.622: 85.6052% ( 45) 00:07:48.661 14014.622 - 14115.446: 85.9644% ( 40) 00:07:48.661 14115.446 - 14216.271: 86.3506% ( 43) 00:07:48.661 14216.271 - 14317.095: 86.7367% ( 43) 00:07:48.661 14317.095 - 14417.920: 87.0510% ( 35) 00:07:48.661 14417.920 - 14518.745: 87.3563% ( 34) 00:07:48.661 14518.745 - 14619.569: 87.6616% ( 34) 00:07:48.661 14619.569 - 14720.394: 88.0029% ( 38) 00:07:48.661 14720.394 - 14821.218: 88.3890% ( 43) 00:07:48.661 14821.218 - 14922.043: 88.8290% ( 49) 00:07:48.661 14922.043 - 15022.868: 89.2421% ( 46) 00:07:48.661 15022.868 - 15123.692: 89.6731% ( 48) 00:07:48.661 15123.692 - 15224.517: 90.0952% ( 47) 00:07:48.661 15224.517 - 15325.342: 90.4813% ( 43) 00:07:48.661 15325.342 - 15426.166: 90.8315% ( 39) 00:07:48.661 15426.166 - 15526.991: 91.0740% ( 27) 00:07:48.661 15526.991 - 15627.815: 91.2626% ( 21) 00:07:48.661 15627.815 - 15728.640: 91.4422% ( 20) 00:07:48.661 15728.640 - 15829.465: 91.6487% ( 23) 00:07:48.661 15829.465 - 15930.289: 91.9181% ( 30) 00:07:48.661 15930.289 - 16031.114: 92.2683% ( 39) 00:07:48.661 16031.114 - 16131.938: 92.6545% ( 43) 00:07:48.661 16131.938 - 16232.763: 92.9867% ( 37) 00:07:48.661 16232.763 - 16333.588: 93.4177% ( 48) 00:07:48.661 16333.588 - 16434.412: 93.8039% ( 43) 00:07:48.661 16434.412 - 16535.237: 94.2259% ( 47) 00:07:48.661 16535.237 - 16636.062: 94.6659% ( 49) 00:07:48.661 16636.062 - 16736.886: 95.0611% ( 44) 00:07:48.661 16736.886 - 16837.711: 95.3664% ( 34) 00:07:48.661 16837.711 - 16938.535: 95.6537% ( 32) 00:07:48.661 16938.535 - 17039.360: 95.8962% ( 27) 00:07:48.661 17039.360 - 17140.185: 96.1566% ( 29) 00:07:48.661 17140.185 - 17241.009: 96.3542% ( 22) 00:07:48.661 17241.009 - 17341.834: 96.5517% ( 22) 00:07:48.661 17341.834 - 17442.658: 96.7583% ( 23) 00:07:48.661 17442.658 - 17543.483: 96.8930% ( 15) 00:07:48.661 17543.483 - 17644.308: 96.9468% ( 6) 00:07:48.661 17644.308 - 17745.132: 97.0007% ( 6) 00:07:48.661 17745.132 - 17845.957: 97.0636% ( 7) 00:07:48.661 17845.957 - 17946.782: 97.1175% ( 6) 00:07:48.661 17946.782 - 18047.606: 97.1264% ( 1) 00:07:48.661 18249.255 - 18350.080: 97.1354% ( 1) 00:07:48.661 18350.080 - 18450.905: 97.1713% ( 4) 00:07:48.661 18450.905 - 18551.729: 97.2252% ( 6) 00:07:48.661 18551.729 - 18652.554: 97.2791% ( 6) 00:07:48.661 18652.554 - 18753.378: 97.3420% ( 7) 00:07:48.661 18753.378 - 18854.203: 97.3958% ( 6) 00:07:48.661 18854.203 - 18955.028: 97.4497% ( 6) 00:07:48.661 18955.028 - 19055.852: 97.5126% ( 7) 00:07:48.661 19055.852 - 19156.677: 97.6383% ( 14) 00:07:48.661 19156.677 - 19257.502: 97.7460% ( 12) 00:07:48.661 19257.502 - 19358.326: 97.8718% ( 14) 00:07:48.661 19358.326 - 19459.151: 97.9436% ( 8) 00:07:48.661 19459.151 - 19559.975: 97.9975% ( 6) 00:07:48.661 19559.975 - 19660.800: 98.0603% ( 7) 00:07:48.661 19660.800 - 19761.625: 98.1591% ( 11) 00:07:48.661 19761.625 - 19862.449: 98.2759% ( 13) 00:07:48.661 19862.449 - 19963.274: 98.3926% ( 13) 00:07:48.661 19963.274 - 20064.098: 98.4914% ( 11) 00:07:48.661 20064.098 - 20164.923: 98.5542% ( 7) 00:07:48.661 20164.923 - 20265.748: 98.6081% ( 6) 00:07:48.661 20265.748 - 20366.572: 98.6620% ( 6) 00:07:48.661 20366.572 - 20467.397: 98.7159% ( 6) 00:07:48.661 20467.397 - 20568.222: 98.7698% ( 6) 00:07:48.661 20568.222 - 20669.046: 98.8326% ( 7) 00:07:48.661 20669.046 - 20769.871: 98.8506% ( 2) 00:07:48.661 21576.468 - 21677.292: 98.8596% ( 1) 00:07:48.661 21677.292 - 21778.117: 98.8955% ( 4) 00:07:48.661 21778.117 - 
21878.942: 98.9314% ( 4) 00:07:48.661 21878.942 - 21979.766: 98.9673% ( 4) 00:07:48.661 21979.766 - 22080.591: 99.0122% ( 5) 00:07:48.661 22080.591 - 22181.415: 99.0481% ( 4) 00:07:48.661 22181.415 - 22282.240: 99.0841% ( 4) 00:07:48.661 22282.240 - 22383.065: 99.1200% ( 4) 00:07:48.661 22383.065 - 22483.889: 99.1559% ( 4) 00:07:48.661 22483.889 - 22584.714: 99.1918% ( 4) 00:07:48.661 22584.714 - 22685.538: 99.2367% ( 5) 00:07:48.661 22685.538 - 22786.363: 99.2726% ( 4) 00:07:48.661 22786.363 - 22887.188: 99.3085% ( 4) 00:07:48.661 22887.188 - 22988.012: 99.3445% ( 4) 00:07:48.661 22988.012 - 23088.837: 99.3894% ( 5) 00:07:48.661 23088.837 - 23189.662: 99.4253% ( 4) 00:07:48.661 27625.945 - 27827.594: 99.4612% ( 4) 00:07:48.661 27827.594 - 28029.243: 99.5420% ( 9) 00:07:48.661 28029.243 - 28230.892: 99.6139% ( 8) 00:07:48.661 28230.892 - 28432.542: 99.6857% ( 8) 00:07:48.661 28432.542 - 28634.191: 99.7575% ( 8) 00:07:48.661 28634.191 - 28835.840: 99.8384% ( 9) 00:07:48.661 28835.840 - 29037.489: 99.9102% ( 8) 00:07:48.661 29037.489 - 29239.138: 99.9910% ( 9) 00:07:48.661 29239.138 - 29440.788: 100.0000% ( 1) 00:07:48.661 00:07:48.661 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:48.661 ============================================================================== 00:07:48.661 Range in us Cumulative IO count 00:07:48.661 6704.837 - 6755.249: 0.0268% ( 3) 00:07:48.661 6755.249 - 6805.662: 0.0893% ( 7) 00:07:48.661 6805.662 - 6856.074: 0.1071% ( 2) 00:07:48.661 6856.074 - 6906.486: 0.1339% ( 3) 00:07:48.661 6906.486 - 6956.898: 0.1786% ( 5) 00:07:48.661 6956.898 - 7007.311: 0.2054% ( 3) 00:07:48.661 7007.311 - 7057.723: 0.2500% ( 5) 00:07:48.661 7057.723 - 7108.135: 0.2857% ( 4) 00:07:48.661 7108.135 - 7158.548: 0.3125% ( 3) 00:07:48.661 7158.548 - 7208.960: 0.3482% ( 4) 00:07:48.661 7208.960 - 7259.372: 0.3750% ( 3) 00:07:48.661 7259.372 - 7309.785: 0.4107% ( 4) 00:07:48.661 7309.785 - 7360.197: 0.4375% ( 3) 00:07:48.661 7360.197 - 7410.609: 0.4643% ( 3) 00:07:48.661 7410.609 - 7461.022: 0.5000% ( 4) 00:07:48.661 7461.022 - 7511.434: 0.5268% ( 3) 00:07:48.661 7511.434 - 7561.846: 0.5625% ( 4) 00:07:48.661 7561.846 - 7612.258: 0.5714% ( 1) 00:07:48.661 8015.557 - 8065.969: 0.5982% ( 3) 00:07:48.661 8065.969 - 8116.382: 0.6696% ( 8) 00:07:48.661 8116.382 - 8166.794: 0.7411% ( 8) 00:07:48.661 8166.794 - 8217.206: 0.7946% ( 6) 00:07:48.661 8217.206 - 8267.618: 0.8393% ( 5) 00:07:48.661 8267.618 - 8318.031: 0.9196% ( 9) 00:07:48.661 8318.031 - 8368.443: 0.9911% ( 8) 00:07:48.661 8368.443 - 8418.855: 1.0446% ( 6) 00:07:48.661 8418.855 - 8469.268: 1.1339% ( 10) 00:07:48.661 8469.268 - 8519.680: 1.2411% ( 12) 00:07:48.661 8519.680 - 8570.092: 1.3393% ( 11) 00:07:48.661 8570.092 - 8620.505: 1.5000% ( 18) 00:07:48.661 8620.505 - 8670.917: 1.6607% ( 18) 00:07:48.661 8670.917 - 8721.329: 1.7768% ( 13) 00:07:48.661 8721.329 - 8771.742: 1.9643% ( 21) 00:07:48.661 8771.742 - 8822.154: 2.1786% ( 24) 00:07:48.661 8822.154 - 8872.566: 2.3750% ( 22) 00:07:48.661 8872.566 - 8922.978: 2.6429% ( 30) 00:07:48.661 8922.978 - 8973.391: 2.8571% ( 24) 00:07:48.661 8973.391 - 9023.803: 3.0446% ( 21) 00:07:48.661 9023.803 - 9074.215: 3.2679% ( 25) 00:07:48.661 9074.215 - 9124.628: 3.5357% ( 30) 00:07:48.661 9124.628 - 9175.040: 3.8750% ( 38) 00:07:48.661 9175.040 - 9225.452: 4.2411% ( 41) 00:07:48.661 9225.452 - 9275.865: 4.5982% ( 40) 00:07:48.661 9275.865 - 9326.277: 4.9911% ( 44) 00:07:48.661 9326.277 - 9376.689: 5.4196% ( 48) 00:07:48.661 9376.689 - 9427.102: 6.0000% ( 65) 00:07:48.661 9427.102 
- 9477.514: 6.5714% ( 64) 00:07:48.661 9477.514 - 9527.926: 7.2321% ( 74) 00:07:48.661 9527.926 - 9578.338: 8.0714% ( 94) 00:07:48.661 9578.338 - 9628.751: 9.1161% ( 117) 00:07:48.661 9628.751 - 9679.163: 10.1161% ( 112) 00:07:48.661 9679.163 - 9729.575: 11.2500% ( 127) 00:07:48.661 9729.575 - 9779.988: 12.7679% ( 170) 00:07:48.661 9779.988 - 9830.400: 14.4196% ( 185) 00:07:48.661 9830.400 - 9880.812: 16.1339% ( 192) 00:07:48.661 9880.812 - 9931.225: 17.7143% ( 177) 00:07:48.661 9931.225 - 9981.637: 19.6071% ( 212) 00:07:48.661 9981.637 - 10032.049: 22.0000% ( 268) 00:07:48.661 10032.049 - 10082.462: 24.3482% ( 263) 00:07:48.661 10082.462 - 10132.874: 26.7946% ( 274) 00:07:48.661 10132.874 - 10183.286: 29.3571% ( 287) 00:07:48.661 10183.286 - 10233.698: 32.2321% ( 322) 00:07:48.661 10233.698 - 10284.111: 35.1339% ( 325) 00:07:48.661 10284.111 - 10334.523: 38.1429% ( 337) 00:07:48.661 10334.523 - 10384.935: 41.2321% ( 346) 00:07:48.661 10384.935 - 10435.348: 44.1250% ( 324) 00:07:48.661 10435.348 - 10485.760: 47.0714% ( 330) 00:07:48.661 10485.760 - 10536.172: 49.8839% ( 315) 00:07:48.661 10536.172 - 10586.585: 52.6071% ( 305) 00:07:48.661 10586.585 - 10636.997: 55.0089% ( 269) 00:07:48.661 10636.997 - 10687.409: 57.3750% ( 265) 00:07:48.661 10687.409 - 10737.822: 60.0446% ( 299) 00:07:48.661 10737.822 - 10788.234: 62.3839% ( 262) 00:07:48.661 10788.234 - 10838.646: 64.5804% ( 246) 00:07:48.661 10838.646 - 10889.058: 66.5536% ( 221) 00:07:48.661 10889.058 - 10939.471: 68.2679% ( 192) 00:07:48.661 10939.471 - 10989.883: 69.9286% ( 186) 00:07:48.661 10989.883 - 11040.295: 71.2589% ( 149) 00:07:48.661 11040.295 - 11090.708: 72.5357% ( 143) 00:07:48.661 11090.708 - 11141.120: 73.6161% ( 121) 00:07:48.661 11141.120 - 11191.532: 74.4286% ( 91) 00:07:48.662 11191.532 - 11241.945: 75.1161% ( 77) 00:07:48.662 11241.945 - 11292.357: 75.6607% ( 61) 00:07:48.662 11292.357 - 11342.769: 76.1250% ( 52) 00:07:48.662 11342.769 - 11393.182: 76.5625% ( 49) 00:07:48.662 11393.182 - 11443.594: 76.9018% ( 38) 00:07:48.662 11443.594 - 11494.006: 77.1518% ( 28) 00:07:48.662 11494.006 - 11544.418: 77.3482% ( 22) 00:07:48.662 11544.418 - 11594.831: 77.4643% ( 13) 00:07:48.662 11594.831 - 11645.243: 77.5446% ( 9) 00:07:48.662 11645.243 - 11695.655: 77.6607% ( 13) 00:07:48.662 11695.655 - 11746.068: 77.7857% ( 14) 00:07:48.662 11746.068 - 11796.480: 77.8750% ( 10) 00:07:48.662 11796.480 - 11846.892: 77.9643% ( 10) 00:07:48.662 11846.892 - 11897.305: 78.0714% ( 12) 00:07:48.662 11897.305 - 11947.717: 78.1786% ( 12) 00:07:48.662 11947.717 - 11998.129: 78.3304% ( 17) 00:07:48.662 11998.129 - 12048.542: 78.4464% ( 13) 00:07:48.662 12048.542 - 12098.954: 78.5625% ( 13) 00:07:48.662 12098.954 - 12149.366: 78.6964% ( 15) 00:07:48.662 12149.366 - 12199.778: 78.8304% ( 15) 00:07:48.662 12199.778 - 12250.191: 78.9554% ( 14) 00:07:48.662 12250.191 - 12300.603: 79.1071% ( 17) 00:07:48.662 12300.603 - 12351.015: 79.2411% ( 15) 00:07:48.662 12351.015 - 12401.428: 79.3929% ( 17) 00:07:48.662 12401.428 - 12451.840: 79.5357% ( 16) 00:07:48.662 12451.840 - 12502.252: 79.6696% ( 15) 00:07:48.662 12502.252 - 12552.665: 79.7857% ( 13) 00:07:48.662 12552.665 - 12603.077: 79.9196% ( 15) 00:07:48.662 12603.077 - 12653.489: 80.0536% ( 15) 00:07:48.662 12653.489 - 12703.902: 80.1786% ( 14) 00:07:48.662 12703.902 - 12754.314: 80.3482% ( 19) 00:07:48.662 12754.314 - 12804.726: 80.5714% ( 25) 00:07:48.662 12804.726 - 12855.138: 80.7500% ( 20) 00:07:48.662 12855.138 - 12905.551: 80.9643% ( 24) 00:07:48.662 12905.551 - 13006.375: 81.4732% ( 57) 
00:07:48.662 13006.375 - 13107.200: 81.8750% ( 45) 00:07:48.662 13107.200 - 13208.025: 82.2321% ( 40) 00:07:48.662 13208.025 - 13308.849: 82.6339% ( 45) 00:07:48.662 13308.849 - 13409.674: 83.0982% ( 52) 00:07:48.662 13409.674 - 13510.498: 83.6339% ( 60) 00:07:48.662 13510.498 - 13611.323: 84.0982% ( 52) 00:07:48.662 13611.323 - 13712.148: 84.5714% ( 53) 00:07:48.662 13712.148 - 13812.972: 85.0357% ( 52) 00:07:48.662 13812.972 - 13913.797: 85.3839% ( 39) 00:07:48.662 13913.797 - 14014.622: 85.6875% ( 34) 00:07:48.662 14014.622 - 14115.446: 86.0536% ( 41) 00:07:48.662 14115.446 - 14216.271: 86.3929% ( 38) 00:07:48.662 14216.271 - 14317.095: 86.8036% ( 46) 00:07:48.662 14317.095 - 14417.920: 87.1339% ( 37) 00:07:48.662 14417.920 - 14518.745: 87.6607% ( 59) 00:07:48.662 14518.745 - 14619.569: 88.1071% ( 50) 00:07:48.662 14619.569 - 14720.394: 88.4911% ( 43) 00:07:48.662 14720.394 - 14821.218: 88.8393% ( 39) 00:07:48.662 14821.218 - 14922.043: 89.1339% ( 33) 00:07:48.662 14922.043 - 15022.868: 89.3661% ( 26) 00:07:48.662 15022.868 - 15123.692: 89.6161% ( 28) 00:07:48.662 15123.692 - 15224.517: 89.9107% ( 33) 00:07:48.662 15224.517 - 15325.342: 90.1071% ( 22) 00:07:48.662 15325.342 - 15426.166: 90.3839% ( 31) 00:07:48.662 15426.166 - 15526.991: 90.6250% ( 27) 00:07:48.662 15526.991 - 15627.815: 90.8393% ( 24) 00:07:48.662 15627.815 - 15728.640: 91.1875% ( 39) 00:07:48.662 15728.640 - 15829.465: 91.3661% ( 20) 00:07:48.662 15829.465 - 15930.289: 91.5625% ( 22) 00:07:48.662 15930.289 - 16031.114: 91.8036% ( 27) 00:07:48.662 16031.114 - 16131.938: 92.1964% ( 44) 00:07:48.662 16131.938 - 16232.763: 92.6875% ( 55) 00:07:48.662 16232.763 - 16333.588: 93.1875% ( 56) 00:07:48.662 16333.588 - 16434.412: 93.6518% ( 52) 00:07:48.662 16434.412 - 16535.237: 94.0714% ( 47) 00:07:48.662 16535.237 - 16636.062: 94.4464% ( 42) 00:07:48.662 16636.062 - 16736.886: 94.8661% ( 47) 00:07:48.662 16736.886 - 16837.711: 95.2768% ( 46) 00:07:48.662 16837.711 - 16938.535: 95.6250% ( 39) 00:07:48.662 16938.535 - 17039.360: 95.9643% ( 38) 00:07:48.662 17039.360 - 17140.185: 96.1875% ( 25) 00:07:48.662 17140.185 - 17241.009: 96.3839% ( 22) 00:07:48.662 17241.009 - 17341.834: 96.5982% ( 24) 00:07:48.662 17341.834 - 17442.658: 96.7857% ( 21) 00:07:48.662 17442.658 - 17543.483: 96.9196% ( 15) 00:07:48.662 17543.483 - 17644.308: 97.0714% ( 17) 00:07:48.662 17644.308 - 17745.132: 97.1875% ( 13) 00:07:48.662 17745.132 - 17845.957: 97.3393% ( 17) 00:07:48.662 17845.957 - 17946.782: 97.5804% ( 27) 00:07:48.662 17946.782 - 18047.606: 97.8036% ( 25) 00:07:48.662 18047.606 - 18148.431: 97.9821% ( 20) 00:07:48.662 18148.431 - 18249.255: 98.1518% ( 19) 00:07:48.662 18249.255 - 18350.080: 98.3214% ( 19) 00:07:48.662 18350.080 - 18450.905: 98.4554% ( 15) 00:07:48.662 18450.905 - 18551.729: 98.5625% ( 12) 00:07:48.662 18551.729 - 18652.554: 98.6607% ( 11) 00:07:48.662 18652.554 - 18753.378: 98.7679% ( 12) 00:07:48.662 18753.378 - 18854.203: 98.8304% ( 7) 00:07:48.662 18854.203 - 18955.028: 98.8571% ( 3) 00:07:48.662 19761.625 - 19862.449: 98.8839% ( 3) 00:07:48.662 19862.449 - 19963.274: 98.9464% ( 7) 00:07:48.662 19963.274 - 20064.098: 99.0000% ( 6) 00:07:48.662 20064.098 - 20164.923: 99.0536% ( 6) 00:07:48.662 20164.923 - 20265.748: 99.1071% ( 6) 00:07:48.662 20265.748 - 20366.572: 99.1607% ( 6) 00:07:48.662 20366.572 - 20467.397: 99.2232% ( 7) 00:07:48.662 20467.397 - 20568.222: 99.2679% ( 5) 00:07:48.662 20568.222 - 20669.046: 99.3214% ( 6) 00:07:48.662 20669.046 - 20769.871: 99.3750% ( 6) 00:07:48.662 20769.871 - 20870.695: 99.4286% ( 
6)
00:07:48.662 21072.345 - 21173.169: 99.4464% ( 2)
00:07:48.662 21173.169 - 21273.994: 99.4821% ( 4)
00:07:48.662 21273.994 - 21374.818: 99.5179% ( 4)
00:07:48.662 21374.818 - 21475.643: 99.5625% ( 5)
00:07:48.662 21475.643 - 21576.468: 99.5982% ( 4)
00:07:48.662 21576.468 - 21677.292: 99.6339% ( 4)
00:07:48.662 21677.292 - 21778.117: 99.6696% ( 4)
00:07:48.662 21778.117 - 21878.942: 99.7054% ( 4)
00:07:48.662 21878.942 - 21979.766: 99.7411% ( 4)
00:07:48.662 21979.766 - 22080.591: 99.7768% ( 4)
00:07:48.662 22080.591 - 22181.415: 99.8125% ( 4)
00:07:48.662 22181.415 - 22282.240: 99.8482% ( 4)
00:07:48.662 22282.240 - 22383.065: 99.8929% ( 5)
00:07:48.662 22383.065 - 22483.889: 99.9286% ( 4)
00:07:48.662 22483.889 - 22584.714: 99.9643% ( 4)
00:07:48.662 22584.714 - 22685.538: 100.0000% ( 4)
00:07:48.662
00:07:48.662 02:52:19 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:50.057 Initializing NVMe Controllers
00:07:50.057 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:50.057 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:50.057 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:50.057 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:50.057 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:50.057 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:50.057 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:50.057 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:50.057 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:50.057 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:50.057 Initialization complete. Launching workers.
00:07:50.057 ========================================================
00:07:50.057 Latency(us)
00:07:50.057 Device Information : IOPS MiB/s Average min max
00:07:50.057 PCIE (0000:00:11.0) NSID 1 from core 0: 9248.36 108.38 13878.15 6348.76 36168.92
00:07:50.057 PCIE (0000:00:13.0) NSID 1 from core 0: 9248.36 108.38 13861.81 6389.85 35215.96
00:07:50.057 PCIE (0000:00:10.0) NSID 1 from core 0: 9248.36 108.38 13843.15 6407.13 34025.81
00:07:50.057 PCIE (0000:00:12.0) NSID 1 from core 0: 9248.36 108.38 13824.73 6307.02 32126.27
00:07:50.057 PCIE (0000:00:12.0) NSID 2 from core 0: 9248.36 108.38 13804.48 6325.71 31382.67
00:07:50.057 PCIE (0000:00:12.0) NSID 3 from core 0: 9312.14 109.13 13688.49 6419.06 23660.25
00:07:50.057 ========================================================
00:07:50.057 Total : 55553.95 651.02 13816.66 6307.02 36168.92
00:07:50.057
00:07:50.057 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:50.057 =================================================================================
00:07:50.057 1.00000% : 6856.074us
00:07:50.057 10.00000% : 10989.883us
00:07:50.057 25.00000% : 12300.603us
00:07:50.057 50.00000% : 13913.797us
00:07:50.057 75.00000% : 15325.342us
00:07:50.057 90.00000% : 16535.237us
00:07:50.057 95.00000% : 17241.009us
00:07:50.057 98.00000% : 18148.431us
00:07:50.057 99.00000% : 29037.489us
00:07:50.057 99.50000% : 34482.018us
00:07:50.057 99.90000% : 36095.212us
00:07:50.057 99.99000% : 36296.862us
00:07:50.057 99.99900% : 36296.862us
00:07:50.057 99.99990% : 36296.862us
00:07:50.057 99.99999% : 36296.862us
00:07:50.057
00:07:50.057 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:50.057 =================================================================================
00:07:50.057 1.00000% : 6856.074us
00:07:50.057 10.00000% : 10737.822us
00:07:50.057 25.00000% : 12451.840us
00:07:50.057 50.00000% : 13913.797us
00:07:50.057 75.00000% : 15426.166us
00:07:50.057 90.00000% : 16535.237us
00:07:50.057 95.00000% : 17140.185us
00:07:50.057 98.00000% : 18249.255us
00:07:50.057 99.00000% : 27020.997us
00:07:50.057 99.50000% : 34280.369us
00:07:50.057 99.90000% : 35086.966us
00:07:50.057 99.99000% : 35288.615us
00:07:50.057 99.99900% : 35288.615us
00:07:50.057 99.99990% : 35288.615us
00:07:50.057 99.99999% : 35288.615us
00:07:50.057
00:07:50.057 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:50.057 =================================================================================
00:07:50.057 1.00000% : 6755.249us
00:07:50.057 10.00000% : 10586.585us
00:07:50.057 25.00000% : 12451.840us
00:07:50.057 50.00000% : 14014.622us
00:07:50.057 75.00000% : 15426.166us
00:07:50.057 90.00000% : 16434.412us
00:07:50.057 95.00000% : 17039.360us
00:07:50.057 98.00000% : 18047.606us
00:07:50.057 99.00000% : 25105.329us
00:07:50.057 99.50000% : 32868.825us
00:07:50.057 99.90000% : 33877.071us
00:07:50.057 99.99000% : 34078.720us
00:07:50.057 99.99900% : 34078.720us
00:07:50.057 99.99990% : 34078.720us
00:07:50.057 99.99999% : 34078.720us
00:07:50.057
00:07:50.057 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:50.057 =================================================================================
00:07:50.057 1.00000% : 6755.249us
00:07:50.057 10.00000% : 10687.409us
00:07:50.057 25.00000% : 12451.840us
00:07:50.057 50.00000% : 13913.797us
00:07:50.057 75.00000% : 15426.166us
00:07:50.057 90.00000% : 16434.412us
00:07:50.057 95.00000% : 17039.360us
00:07:50.057 98.00000% : 18350.080us
00:07:50.057 99.00000% : 23290.486us
00:07:50.057 99.50000% : 31053.982us
00:07:50.057 99.90000% : 32062.228us
00:07:50.057 99.99000% : 32263.877us
00:07:50.057 99.99900% : 32263.877us
00:07:50.057 99.99990% : 32263.877us
00:07:50.057 99.99999% : 32263.877us
00:07:50.057
00:07:50.057 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:50.057 =================================================================================
00:07:50.057 1.00000% : 6906.486us
00:07:50.057 10.00000% : 10788.234us
00:07:50.057 25.00000% : 12300.603us
00:07:50.057 50.00000% : 13913.797us
00:07:50.057 75.00000% : 15426.166us
00:07:50.057 90.00000% : 16333.588us
00:07:50.057 95.00000% : 17241.009us
00:07:50.057 98.00000% : 18652.554us
00:07:50.057 99.00000% : 23088.837us
00:07:50.057 99.50000% : 30449.034us
00:07:50.057 99.90000% : 31255.631us
00:07:50.057 99.99000% : 31457.280us
00:07:50.057 99.99900% : 31457.280us
00:07:50.057 99.99990% : 31457.280us
00:07:50.057 99.99999% : 31457.280us
00:07:50.057
00:07:50.057 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:50.057 =================================================================================
00:07:50.057 1.00000% : 6856.074us
00:07:50.057 10.00000% : 10989.883us
00:07:50.057 25.00000% : 12199.778us
00:07:50.057 50.00000% : 14014.622us
00:07:50.057 75.00000% : 15426.166us
00:07:50.057 90.00000% : 16535.237us
00:07:50.057 95.00000% : 17039.360us
00:07:50.057 98.00000% : 17845.957us
00:07:50.057 99.00000% : 18753.378us
00:07:50.057 99.50000% : 22584.714us
00:07:50.057 99.90000% : 23492.135us
00:07:50.057 99.99000% : 23693.785us
00:07:50.057 99.99900% : 23693.785us
00:07:50.057 99.99990% : 23693.785us
00:07:50.057 99.99999% : 23693.785us
00:07:50.057
00:07:50.057 Latency histogram for PCIE (0000:00:11.0) NSID 1 from
core 0: 00:07:50.057 ============================================================================== 00:07:50.057 Range in us Cumulative IO count 00:07:50.057 6326.745 - 6351.951: 0.0108% ( 1) 00:07:50.057 6427.569 - 6452.775: 0.0216% ( 1) 00:07:50.057 6553.600 - 6604.012: 0.0323% ( 1) 00:07:50.057 6604.012 - 6654.425: 0.1293% ( 9) 00:07:50.057 6654.425 - 6704.837: 0.2694% ( 13) 00:07:50.057 6704.837 - 6755.249: 0.4310% ( 15) 00:07:50.057 6755.249 - 6805.662: 0.7435% ( 29) 00:07:50.057 6805.662 - 6856.074: 1.0022% ( 24) 00:07:50.057 6856.074 - 6906.486: 1.1638% ( 15) 00:07:50.057 6906.486 - 6956.898: 1.2500% ( 8) 00:07:50.057 6956.898 - 7007.311: 1.3578% ( 10) 00:07:50.057 7007.311 - 7057.723: 1.3793% ( 2) 00:07:50.057 7914.732 - 7965.145: 1.4224% ( 4) 00:07:50.057 7965.145 - 8015.557: 1.5302% ( 10) 00:07:50.057 8015.557 - 8065.969: 1.7672% ( 22) 00:07:50.057 8065.969 - 8116.382: 2.0151% ( 23) 00:07:50.057 8116.382 - 8166.794: 2.6293% ( 57) 00:07:50.057 8166.794 - 8217.206: 2.9849% ( 33) 00:07:50.057 8217.206 - 8267.618: 3.2543% ( 25) 00:07:50.057 8267.618 - 8318.031: 3.5991% ( 32) 00:07:50.057 8318.031 - 8368.443: 3.8254% ( 21) 00:07:50.057 8368.443 - 8418.855: 3.9871% ( 15) 00:07:50.057 8418.855 - 8469.268: 4.2996% ( 29) 00:07:50.057 8469.268 - 8519.680: 4.5366% ( 22) 00:07:50.057 8519.680 - 8570.092: 4.7522% ( 20) 00:07:50.057 8570.092 - 8620.505: 4.9784% ( 21) 00:07:50.057 8620.505 - 8670.917: 5.2694% ( 27) 00:07:50.058 8670.917 - 8721.329: 5.7435% ( 44) 00:07:50.058 8721.329 - 8771.742: 5.9267% ( 17) 00:07:50.058 8771.742 - 8822.154: 6.0668% ( 13) 00:07:50.058 8822.154 - 8872.566: 6.1638% ( 9) 00:07:50.058 8872.566 - 8922.978: 6.2069% ( 4) 00:07:50.058 9124.628 - 9175.040: 6.2177% ( 1) 00:07:50.058 9175.040 - 9225.452: 6.2284% ( 1) 00:07:50.058 9326.277 - 9376.689: 6.2392% ( 1) 00:07:50.058 9376.689 - 9427.102: 6.2931% ( 5) 00:07:50.058 9427.102 - 9477.514: 6.3578% ( 6) 00:07:50.058 9477.514 - 9527.926: 6.4871% ( 12) 00:07:50.058 9527.926 - 9578.338: 6.8211% ( 31) 00:07:50.058 9578.338 - 9628.751: 6.9504% ( 12) 00:07:50.058 9628.751 - 9679.163: 7.0582% ( 10) 00:07:50.058 9679.163 - 9729.575: 7.3491% ( 27) 00:07:50.058 9729.575 - 9779.988: 7.4138% ( 6) 00:07:50.058 9779.988 - 9830.400: 7.4569% ( 4) 00:07:50.058 9830.400 - 9880.812: 7.4892% ( 3) 00:07:50.058 9880.812 - 9931.225: 7.5000% ( 1) 00:07:50.058 9931.225 - 9981.637: 7.5216% ( 2) 00:07:50.058 9981.637 - 10032.049: 7.5323% ( 1) 00:07:50.058 10032.049 - 10082.462: 7.5539% ( 2) 00:07:50.058 10082.462 - 10132.874: 7.5754% ( 2) 00:07:50.058 10132.874 - 10183.286: 7.5862% ( 1) 00:07:50.058 10233.698 - 10284.111: 7.5970% ( 1) 00:07:50.058 10384.935 - 10435.348: 7.6185% ( 2) 00:07:50.058 10435.348 - 10485.760: 7.6401% ( 2) 00:07:50.058 10485.760 - 10536.172: 7.7047% ( 6) 00:07:50.058 10536.172 - 10586.585: 7.7802% ( 7) 00:07:50.058 10586.585 - 10636.997: 7.9203% ( 13) 00:07:50.058 10636.997 - 10687.409: 8.1358% ( 20) 00:07:50.058 10687.409 - 10737.822: 8.4483% ( 29) 00:07:50.058 10737.822 - 10788.234: 8.7931% ( 32) 00:07:50.058 10788.234 - 10838.646: 9.1164% ( 30) 00:07:50.058 10838.646 - 10889.058: 9.6228% ( 47) 00:07:50.058 10889.058 - 10939.471: 9.9138% ( 27) 00:07:50.058 10939.471 - 10989.883: 10.2694% ( 33) 00:07:50.058 10989.883 - 11040.295: 10.7220% ( 42) 00:07:50.058 11040.295 - 11090.708: 11.2392% ( 48) 00:07:50.058 11090.708 - 11141.120: 11.8858% ( 60) 00:07:50.058 11141.120 - 11191.532: 12.5539% ( 62) 00:07:50.058 11191.532 - 11241.945: 13.0603% ( 47) 00:07:50.058 11241.945 - 11292.357: 13.4052% ( 32) 00:07:50.058 11292.357 - 
11342.769: 13.7716% ( 34) 00:07:50.058 11342.769 - 11393.182: 14.2241% ( 42) 00:07:50.058 11393.182 - 11443.594: 14.7845% ( 52) 00:07:50.058 11443.594 - 11494.006: 15.3664% ( 54) 00:07:50.058 11494.006 - 11544.418: 15.9267% ( 52) 00:07:50.058 11544.418 - 11594.831: 16.3793% ( 42) 00:07:50.058 11594.831 - 11645.243: 16.7672% ( 36) 00:07:50.058 11645.243 - 11695.655: 17.0582% ( 27) 00:07:50.058 11695.655 - 11746.068: 17.3491% ( 27) 00:07:50.058 11746.068 - 11796.480: 17.7586% ( 38) 00:07:50.058 11796.480 - 11846.892: 18.3944% ( 59) 00:07:50.058 11846.892 - 11897.305: 18.9871% ( 55) 00:07:50.058 11897.305 - 11947.717: 19.6336% ( 60) 00:07:50.058 11947.717 - 11998.129: 20.3879% ( 70) 00:07:50.058 11998.129 - 12048.542: 21.3901% ( 93) 00:07:50.058 12048.542 - 12098.954: 22.3922% ( 93) 00:07:50.058 12098.954 - 12149.366: 23.1250% ( 68) 00:07:50.058 12149.366 - 12199.778: 23.8685% ( 69) 00:07:50.058 12199.778 - 12250.191: 24.4504% ( 54) 00:07:50.058 12250.191 - 12300.603: 25.0216% ( 53) 00:07:50.058 12300.603 - 12351.015: 25.6250% ( 56) 00:07:50.058 12351.015 - 12401.428: 26.2500% ( 58) 00:07:50.058 12401.428 - 12451.840: 26.8858% ( 59) 00:07:50.058 12451.840 - 12502.252: 27.5754% ( 64) 00:07:50.058 12502.252 - 12552.665: 28.2004% ( 58) 00:07:50.058 12552.665 - 12603.077: 28.8470% ( 60) 00:07:50.058 12603.077 - 12653.489: 29.3858% ( 50) 00:07:50.058 12653.489 - 12703.902: 29.9569% ( 53) 00:07:50.058 12703.902 - 12754.314: 30.3448% ( 36) 00:07:50.058 12754.314 - 12804.726: 30.8297% ( 45) 00:07:50.058 12804.726 - 12855.138: 31.4763% ( 60) 00:07:50.058 12855.138 - 12905.551: 32.0474% ( 53) 00:07:50.058 12905.551 - 13006.375: 33.5776% ( 142) 00:07:50.058 13006.375 - 13107.200: 35.4957% ( 178) 00:07:50.058 13107.200 - 13208.025: 37.1983% ( 158) 00:07:50.058 13208.025 - 13308.849: 39.0517% ( 172) 00:07:50.058 13308.849 - 13409.674: 41.1422% ( 194) 00:07:50.058 13409.674 - 13510.498: 43.1573% ( 187) 00:07:50.058 13510.498 - 13611.323: 45.3879% ( 207) 00:07:50.058 13611.323 - 13712.148: 47.2306% ( 171) 00:07:50.058 13712.148 - 13812.972: 48.9978% ( 164) 00:07:50.058 13812.972 - 13913.797: 50.8513% ( 172) 00:07:50.058 13913.797 - 14014.622: 52.8448% ( 185) 00:07:50.058 14014.622 - 14115.446: 54.8815% ( 189) 00:07:50.058 14115.446 - 14216.271: 56.6487% ( 164) 00:07:50.058 14216.271 - 14317.095: 58.1573% ( 140) 00:07:50.058 14317.095 - 14417.920: 59.8168% ( 154) 00:07:50.058 14417.920 - 14518.745: 61.1961% ( 128) 00:07:50.058 14518.745 - 14619.569: 62.7586% ( 145) 00:07:50.058 14619.569 - 14720.394: 64.4073% ( 153) 00:07:50.058 14720.394 - 14821.218: 66.1099% ( 158) 00:07:50.058 14821.218 - 14922.043: 68.1897% ( 193) 00:07:50.058 14922.043 - 15022.868: 70.2047% ( 187) 00:07:50.058 15022.868 - 15123.692: 72.4677% ( 210) 00:07:50.058 15123.692 - 15224.517: 74.2996% ( 170) 00:07:50.058 15224.517 - 15325.342: 75.9052% ( 149) 00:07:50.058 15325.342 - 15426.166: 77.2629% ( 126) 00:07:50.058 15426.166 - 15526.991: 78.7069% ( 134) 00:07:50.058 15526.991 - 15627.815: 80.0647% ( 126) 00:07:50.058 15627.815 - 15728.640: 81.4763% ( 131) 00:07:50.058 15728.640 - 15829.465: 82.7586% ( 119) 00:07:50.058 15829.465 - 15930.289: 84.0194% ( 117) 00:07:50.058 15930.289 - 16031.114: 85.1832% ( 108) 00:07:50.058 16031.114 - 16131.938: 86.3578% ( 109) 00:07:50.058 16131.938 - 16232.763: 87.5000% ( 106) 00:07:50.058 16232.763 - 16333.588: 88.2974% ( 74) 00:07:50.058 16333.588 - 16434.412: 89.2134% ( 85) 00:07:50.058 16434.412 - 16535.237: 90.2694% ( 98) 00:07:50.058 16535.237 - 16636.062: 91.4009% ( 105) 00:07:50.058 16636.062 - 
16736.886: 92.2522% ( 79) 00:07:50.058 16736.886 - 16837.711: 93.0603% ( 75) 00:07:50.058 16837.711 - 16938.535: 93.9547% ( 83) 00:07:50.058 16938.535 - 17039.360: 94.5043% ( 51) 00:07:50.058 17039.360 - 17140.185: 94.9246% ( 39) 00:07:50.058 17140.185 - 17241.009: 95.3341% ( 38) 00:07:50.058 17241.009 - 17341.834: 95.7328% ( 37) 00:07:50.058 17341.834 - 17442.658: 96.1961% ( 43) 00:07:50.058 17442.658 - 17543.483: 96.5409% ( 32) 00:07:50.058 17543.483 - 17644.308: 96.7996% ( 24) 00:07:50.058 17644.308 - 17745.132: 97.0474% ( 23) 00:07:50.058 17745.132 - 17845.957: 97.4030% ( 33) 00:07:50.058 17845.957 - 17946.782: 97.6185% ( 20) 00:07:50.058 17946.782 - 18047.606: 97.8341% ( 20) 00:07:50.058 18047.606 - 18148.431: 98.0603% ( 21) 00:07:50.058 18148.431 - 18249.255: 98.2328% ( 16) 00:07:50.058 18249.255 - 18350.080: 98.3728% ( 13) 00:07:50.058 18350.080 - 18450.905: 98.4591% ( 8) 00:07:50.058 18450.905 - 18551.729: 98.5237% ( 6) 00:07:50.058 18551.729 - 18652.554: 98.5776% ( 5) 00:07:50.058 18652.554 - 18753.378: 98.6207% ( 4) 00:07:50.058 27424.295 - 27625.945: 98.6315% ( 1) 00:07:50.058 28230.892 - 28432.542: 98.7069% ( 7) 00:07:50.058 28432.542 - 28634.191: 98.8362% ( 12) 00:07:50.058 28634.191 - 28835.840: 98.9332% ( 9) 00:07:50.058 28835.840 - 29037.489: 99.0194% ( 8) 00:07:50.058 29037.489 - 29239.138: 99.1056% ( 8) 00:07:50.058 29239.138 - 29440.788: 99.1918% ( 8) 00:07:50.058 29440.788 - 29642.437: 99.2780% ( 8) 00:07:50.058 29642.437 - 29844.086: 99.3103% ( 3) 00:07:50.058 34078.720 - 34280.369: 99.3750% ( 6) 00:07:50.058 34280.369 - 34482.018: 99.5797% ( 19) 00:07:50.058 34482.018 - 34683.668: 99.6983% ( 11) 00:07:50.058 35288.615 - 35490.265: 99.7091% ( 1) 00:07:50.058 35490.265 - 35691.914: 99.7953% ( 8) 00:07:50.058 35691.914 - 35893.563: 99.8815% ( 8) 00:07:50.058 35893.563 - 36095.212: 99.9677% ( 8) 00:07:50.058 36095.212 - 36296.862: 100.0000% ( 3) 00:07:50.058 00:07:50.058 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:50.058 ============================================================================== 00:07:50.058 Range in us Cumulative IO count 00:07:50.058 6377.157 - 6402.363: 0.0108% ( 1) 00:07:50.058 6452.775 - 6503.188: 0.0431% ( 3) 00:07:50.058 6503.188 - 6553.600: 0.0862% ( 4) 00:07:50.058 6553.600 - 6604.012: 0.1509% ( 6) 00:07:50.058 6604.012 - 6654.425: 0.5388% ( 36) 00:07:50.058 6654.425 - 6704.837: 0.6250% ( 8) 00:07:50.058 6704.837 - 6755.249: 0.7651% ( 13) 00:07:50.058 6755.249 - 6805.662: 0.9591% ( 18) 00:07:50.058 6805.662 - 6856.074: 1.2500% ( 27) 00:07:50.058 6856.074 - 6906.486: 1.3254% ( 7) 00:07:50.058 6906.486 - 6956.898: 1.3578% ( 3) 00:07:50.058 6956.898 - 7007.311: 1.3793% ( 2) 00:07:50.058 7561.846 - 7612.258: 1.4009% ( 2) 00:07:50.058 7612.258 - 7662.671: 1.5086% ( 10) 00:07:50.058 7662.671 - 7713.083: 1.5841% ( 7) 00:07:50.058 7713.083 - 7763.495: 1.7026% ( 11) 00:07:50.058 7763.495 - 7813.908: 1.8427% ( 13) 00:07:50.058 7813.908 - 7864.320: 2.1444% ( 28) 00:07:50.058 7864.320 - 7914.732: 2.3599% ( 20) 00:07:50.058 7914.732 - 7965.145: 2.5647% ( 19) 00:07:50.058 7965.145 - 8015.557: 2.7909% ( 21) 00:07:50.058 8015.557 - 8065.969: 3.0496% ( 24) 00:07:50.058 8065.969 - 8116.382: 3.3190% ( 25) 00:07:50.058 8116.382 - 8166.794: 3.5453% ( 21) 00:07:50.058 8166.794 - 8217.206: 4.0948% ( 51) 00:07:50.058 8217.206 - 8267.618: 4.4504% ( 33) 00:07:50.058 8267.618 - 8318.031: 4.5474% ( 9) 00:07:50.058 8318.031 - 8368.443: 4.6444% ( 9) 00:07:50.058 8368.443 - 8418.855: 4.7306% ( 8) 00:07:50.058 8418.855 - 8469.268: 4.8168% ( 8) 
00:07:50.058 8469.268 - 8519.680: 4.8276% ( 1) 00:07:50.058 8519.680 - 8570.092: 4.8384% ( 1) 00:07:50.058 8721.329 - 8771.742: 4.8599% ( 2) 00:07:50.059 8771.742 - 8822.154: 4.9246% ( 6) 00:07:50.059 8822.154 - 8872.566: 5.0431% ( 11) 00:07:50.059 8872.566 - 8922.978: 5.3556% ( 29) 00:07:50.059 8922.978 - 8973.391: 5.3987% ( 4) 00:07:50.059 8973.391 - 9023.803: 5.4310% ( 3) 00:07:50.059 9023.803 - 9074.215: 5.4741% ( 4) 00:07:50.059 9074.215 - 9124.628: 5.5496% ( 7) 00:07:50.059 9124.628 - 9175.040: 5.6466% ( 9) 00:07:50.059 9175.040 - 9225.452: 5.8297% ( 17) 00:07:50.059 9225.452 - 9275.865: 6.0237% ( 18) 00:07:50.059 9275.865 - 9326.277: 6.0884% ( 6) 00:07:50.059 9326.277 - 9376.689: 6.1422% ( 5) 00:07:50.059 9376.689 - 9427.102: 6.2716% ( 12) 00:07:50.059 9427.102 - 9477.514: 6.3685% ( 9) 00:07:50.059 9477.514 - 9527.926: 6.4763% ( 10) 00:07:50.059 9527.926 - 9578.338: 6.5733% ( 9) 00:07:50.059 9578.338 - 9628.751: 6.6918% ( 11) 00:07:50.059 9628.751 - 9679.163: 6.8319% ( 13) 00:07:50.059 9679.163 - 9729.575: 6.9828% ( 14) 00:07:50.059 9729.575 - 9779.988: 7.1659% ( 17) 00:07:50.059 9779.988 - 9830.400: 7.2737% ( 10) 00:07:50.059 9830.400 - 9880.812: 7.3384% ( 6) 00:07:50.059 9880.812 - 9931.225: 7.3922% ( 5) 00:07:50.059 9931.225 - 9981.637: 7.4461% ( 5) 00:07:50.059 9981.637 - 10032.049: 7.5000% ( 5) 00:07:50.059 10032.049 - 10082.462: 7.5431% ( 4) 00:07:50.059 10082.462 - 10132.874: 7.5862% ( 4) 00:07:50.059 10132.874 - 10183.286: 7.5970% ( 1) 00:07:50.059 10233.698 - 10284.111: 7.6078% ( 1) 00:07:50.059 10284.111 - 10334.523: 7.6832% ( 7) 00:07:50.059 10334.523 - 10384.935: 7.8664% ( 17) 00:07:50.059 10384.935 - 10435.348: 8.0496% ( 17) 00:07:50.059 10435.348 - 10485.760: 8.2759% ( 21) 00:07:50.059 10485.760 - 10536.172: 8.6530% ( 35) 00:07:50.059 10536.172 - 10586.585: 8.9009% ( 23) 00:07:50.059 10586.585 - 10636.997: 9.3534% ( 42) 00:07:50.059 10636.997 - 10687.409: 9.7306% ( 35) 00:07:50.059 10687.409 - 10737.822: 10.0754% ( 32) 00:07:50.059 10737.822 - 10788.234: 10.6034% ( 49) 00:07:50.059 10788.234 - 10838.646: 11.0560% ( 42) 00:07:50.059 10838.646 - 10889.058: 11.4116% ( 33) 00:07:50.059 10889.058 - 10939.471: 11.6487% ( 22) 00:07:50.059 10939.471 - 10989.883: 11.8642% ( 20) 00:07:50.059 10989.883 - 11040.295: 12.2414% ( 35) 00:07:50.059 11040.295 - 11090.708: 12.5539% ( 29) 00:07:50.059 11090.708 - 11141.120: 12.8987% ( 32) 00:07:50.059 11141.120 - 11191.532: 13.2435% ( 32) 00:07:50.059 11191.532 - 11241.945: 13.5560% ( 29) 00:07:50.059 11241.945 - 11292.357: 13.9978% ( 41) 00:07:50.059 11292.357 - 11342.769: 14.3858% ( 36) 00:07:50.059 11342.769 - 11393.182: 14.8276% ( 41) 00:07:50.059 11393.182 - 11443.594: 15.1724% ( 32) 00:07:50.059 11443.594 - 11494.006: 15.4634% ( 27) 00:07:50.059 11494.006 - 11544.418: 15.8621% ( 37) 00:07:50.059 11544.418 - 11594.831: 16.5409% ( 63) 00:07:50.059 11594.831 - 11645.243: 17.0905% ( 51) 00:07:50.059 11645.243 - 11695.655: 17.5754% ( 45) 00:07:50.059 11695.655 - 11746.068: 17.9634% ( 36) 00:07:50.059 11746.068 - 11796.480: 18.2866% ( 30) 00:07:50.059 11796.480 - 11846.892: 18.7284% ( 41) 00:07:50.059 11846.892 - 11897.305: 19.1810% ( 42) 00:07:50.059 11897.305 - 11947.717: 19.7522% ( 53) 00:07:50.059 11947.717 - 11998.129: 20.3125% ( 52) 00:07:50.059 11998.129 - 12048.542: 20.8621% ( 51) 00:07:50.059 12048.542 - 12098.954: 21.4224% ( 52) 00:07:50.059 12098.954 - 12149.366: 21.8534% ( 40) 00:07:50.059 12149.366 - 12199.778: 22.2737% ( 39) 00:07:50.059 12199.778 - 12250.191: 22.7371% ( 43) 00:07:50.059 12250.191 - 12300.603: 23.3513% ( 57) 
00:07:50.059 12300.603 - 12351.015: 24.2026% ( 79) 00:07:50.059 12351.015 - 12401.428: 24.8815% ( 63) 00:07:50.059 12401.428 - 12451.840: 25.6897% ( 75) 00:07:50.059 12451.840 - 12502.252: 26.3362% ( 60) 00:07:50.059 12502.252 - 12552.665: 26.9181% ( 54) 00:07:50.059 12552.665 - 12603.077: 27.5323% ( 57) 00:07:50.059 12603.077 - 12653.489: 28.2759% ( 69) 00:07:50.059 12653.489 - 12703.902: 29.2349% ( 89) 00:07:50.059 12703.902 - 12754.314: 29.9784% ( 69) 00:07:50.059 12754.314 - 12804.726: 30.4741% ( 46) 00:07:50.059 12804.726 - 12855.138: 30.9591% ( 45) 00:07:50.059 12855.138 - 12905.551: 31.5194% ( 52) 00:07:50.059 12905.551 - 13006.375: 32.6940% ( 109) 00:07:50.059 13006.375 - 13107.200: 33.9332% ( 115) 00:07:50.059 13107.200 - 13208.025: 35.6789% ( 162) 00:07:50.059 13208.025 - 13308.849: 37.8987% ( 206) 00:07:50.059 13308.849 - 13409.674: 39.9461% ( 190) 00:07:50.059 13409.674 - 13510.498: 41.7565% ( 168) 00:07:50.059 13510.498 - 13611.323: 44.1595% ( 223) 00:07:50.059 13611.323 - 13712.148: 46.2823% ( 197) 00:07:50.059 13712.148 - 13812.972: 48.7177% ( 226) 00:07:50.059 13812.972 - 13913.797: 50.8190% ( 195) 00:07:50.059 13913.797 - 14014.622: 52.8556% ( 189) 00:07:50.059 14014.622 - 14115.446: 54.9353% ( 193) 00:07:50.059 14115.446 - 14216.271: 56.6595% ( 160) 00:07:50.059 14216.271 - 14317.095: 58.5237% ( 173) 00:07:50.059 14317.095 - 14417.920: 60.2371% ( 159) 00:07:50.059 14417.920 - 14518.745: 61.9935% ( 163) 00:07:50.059 14518.745 - 14619.569: 63.6315% ( 152) 00:07:50.059 14619.569 - 14720.394: 65.3664% ( 161) 00:07:50.059 14720.394 - 14821.218: 66.7780% ( 131) 00:07:50.059 14821.218 - 14922.043: 68.6638% ( 175) 00:07:50.059 14922.043 - 15022.868: 70.1940% ( 142) 00:07:50.059 15022.868 - 15123.692: 71.5733% ( 128) 00:07:50.059 15123.692 - 15224.517: 73.1681% ( 148) 00:07:50.059 15224.517 - 15325.342: 74.8922% ( 160) 00:07:50.059 15325.342 - 15426.166: 76.8858% ( 185) 00:07:50.059 15426.166 - 15526.991: 78.5776% ( 157) 00:07:50.059 15526.991 - 15627.815: 80.3233% ( 162) 00:07:50.059 15627.815 - 15728.640: 81.5194% ( 111) 00:07:50.059 15728.640 - 15829.465: 82.6832% ( 108) 00:07:50.059 15829.465 - 15930.289: 83.6207% ( 87) 00:07:50.059 15930.289 - 16031.114: 84.6875% ( 99) 00:07:50.059 16031.114 - 16131.938: 85.7974% ( 103) 00:07:50.059 16131.938 - 16232.763: 86.9073% ( 103) 00:07:50.059 16232.763 - 16333.588: 88.2435% ( 124) 00:07:50.059 16333.588 - 16434.412: 89.2780% ( 96) 00:07:50.059 16434.412 - 16535.237: 90.3341% ( 98) 00:07:50.059 16535.237 - 16636.062: 91.4116% ( 100) 00:07:50.059 16636.062 - 16736.886: 92.3707% ( 89) 00:07:50.059 16736.886 - 16837.711: 93.2543% ( 82) 00:07:50.059 16837.711 - 16938.535: 94.1056% ( 79) 00:07:50.059 16938.535 - 17039.360: 94.9030% ( 74) 00:07:50.059 17039.360 - 17140.185: 95.5280% ( 58) 00:07:50.059 17140.185 - 17241.009: 96.0022% ( 44) 00:07:50.059 17241.009 - 17341.834: 96.3901% ( 36) 00:07:50.059 17341.834 - 17442.658: 96.7241% ( 31) 00:07:50.059 17442.658 - 17543.483: 96.9612% ( 22) 00:07:50.059 17543.483 - 17644.308: 97.1659% ( 19) 00:07:50.059 17644.308 - 17745.132: 97.2953% ( 12) 00:07:50.059 17745.132 - 17845.957: 97.4030% ( 10) 00:07:50.059 17845.957 - 17946.782: 97.5108% ( 10) 00:07:50.059 17946.782 - 18047.606: 97.6293% ( 11) 00:07:50.059 18047.606 - 18148.431: 97.8233% ( 18) 00:07:50.059 18148.431 - 18249.255: 98.0280% ( 19) 00:07:50.059 18249.255 - 18350.080: 98.1573% ( 12) 00:07:50.059 18350.080 - 18450.905: 98.2543% ( 9) 00:07:50.059 18450.905 - 18551.729: 98.3621% ( 10) 00:07:50.059 18551.729 - 18652.554: 98.4375% ( 7) 
00:07:50.059 18652.554 - 18753.378: 98.4914% ( 5) 00:07:50.059 18753.378 - 18854.203: 98.5560% ( 6) 00:07:50.059 18854.203 - 18955.028: 98.5991% ( 4) 00:07:50.059 18955.028 - 19055.852: 98.6207% ( 2) 00:07:50.059 25811.102 - 26012.751: 98.6530% ( 3) 00:07:50.059 26012.751 - 26214.400: 98.7392% ( 8) 00:07:50.059 26214.400 - 26416.049: 98.8254% ( 8) 00:07:50.059 26416.049 - 26617.698: 98.9116% ( 8) 00:07:50.059 26617.698 - 26819.348: 98.9978% ( 8) 00:07:50.059 26819.348 - 27020.997: 99.0733% ( 7) 00:07:50.059 27020.997 - 27222.646: 99.1703% ( 9) 00:07:50.059 27222.646 - 27424.295: 99.2565% ( 8) 00:07:50.059 27424.295 - 27625.945: 99.3103% ( 5) 00:07:50.059 33473.772 - 33675.422: 99.3211% ( 1) 00:07:50.059 33675.422 - 33877.071: 99.4073% ( 8) 00:07:50.059 33877.071 - 34078.720: 99.4828% ( 7) 00:07:50.059 34078.720 - 34280.369: 99.5690% ( 8) 00:07:50.059 34280.369 - 34482.018: 99.6552% ( 8) 00:07:50.059 34482.018 - 34683.668: 99.7522% ( 9) 00:07:50.059 34683.668 - 34885.317: 99.8384% ( 8) 00:07:50.059 34885.317 - 35086.966: 99.9353% ( 9) 00:07:50.059 35086.966 - 35288.615: 100.0000% ( 6) 00:07:50.059 00:07:50.059 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:50.059 ============================================================================== 00:07:50.059 Range in us Cumulative IO count 00:07:50.059 6402.363 - 6427.569: 0.0216% ( 2) 00:07:50.059 6427.569 - 6452.775: 0.0431% ( 2) 00:07:50.059 6452.775 - 6503.188: 0.1078% ( 6) 00:07:50.059 6503.188 - 6553.600: 0.1832% ( 7) 00:07:50.059 6553.600 - 6604.012: 0.4203% ( 22) 00:07:50.059 6604.012 - 6654.425: 0.6142% ( 18) 00:07:50.059 6654.425 - 6704.837: 0.8405% ( 21) 00:07:50.059 6704.837 - 6755.249: 1.0237% ( 17) 00:07:50.059 6755.249 - 6805.662: 1.1315% ( 10) 00:07:50.059 6805.662 - 6856.074: 1.2177% ( 8) 00:07:50.059 6856.074 - 6906.486: 1.3039% ( 8) 00:07:50.059 6906.486 - 6956.898: 1.3362% ( 3) 00:07:50.059 6956.898 - 7007.311: 1.3470% ( 1) 00:07:50.059 7007.311 - 7057.723: 1.3685% ( 2) 00:07:50.059 7057.723 - 7108.135: 1.3793% ( 1) 00:07:50.059 7158.548 - 7208.960: 1.3901% ( 1) 00:07:50.059 7208.960 - 7259.372: 1.4116% ( 2) 00:07:50.059 7259.372 - 7309.785: 1.4547% ( 4) 00:07:50.059 7360.197 - 7410.609: 1.4655% ( 1) 00:07:50.059 7410.609 - 7461.022: 1.4763% ( 1) 00:07:50.059 7461.022 - 7511.434: 1.5194% ( 4) 00:07:50.059 7511.434 - 7561.846: 1.7349% ( 20) 00:07:50.059 7561.846 - 7612.258: 1.9289% ( 18) 00:07:50.059 7612.258 - 7662.671: 2.1444% ( 20) 00:07:50.059 7662.671 - 7713.083: 2.2737% ( 12) 00:07:50.059 7713.083 - 7763.495: 2.4461% ( 16) 00:07:50.060 7763.495 - 7813.908: 2.5216% ( 7) 00:07:50.060 7813.908 - 7864.320: 2.5862% ( 6) 00:07:50.060 7864.320 - 7914.732: 2.7694% ( 17) 00:07:50.060 7914.732 - 7965.145: 2.8448% ( 7) 00:07:50.060 7965.145 - 8015.557: 3.0172% ( 16) 00:07:50.060 8015.557 - 8065.969: 3.1142% ( 9) 00:07:50.060 8065.969 - 8116.382: 3.3297% ( 20) 00:07:50.060 8116.382 - 8166.794: 3.7177% ( 36) 00:07:50.060 8166.794 - 8217.206: 3.7931% ( 7) 00:07:50.060 8217.206 - 8267.618: 3.8578% ( 6) 00:07:50.060 8267.618 - 8318.031: 3.9116% ( 5) 00:07:50.060 8318.031 - 8368.443: 3.9440% ( 3) 00:07:50.060 8368.443 - 8418.855: 3.9871% ( 4) 00:07:50.060 8418.855 - 8469.268: 4.0086% ( 2) 00:07:50.060 8469.268 - 8519.680: 4.0409% ( 3) 00:07:50.060 8519.680 - 8570.092: 4.0733% ( 3) 00:07:50.060 8570.092 - 8620.505: 4.1164% ( 4) 00:07:50.060 8620.505 - 8670.917: 4.1810% ( 6) 00:07:50.060 8670.917 - 8721.329: 4.2457% ( 6) 00:07:50.060 8721.329 - 8771.742: 4.3750% ( 12) 00:07:50.060 8771.742 - 8822.154: 4.5043% ( 12) 
00:07:50.060  [ per-bucket cumulative IO counts continue from 8822.154 us, reaching 100.0000% at 34078.720 us ]
00:07:50.061 
00:07:50.061 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:50.061 ==============================================================================
00:07:50.061        Range in us     Cumulative    IO count
00:07:50.061  [ per-bucket cumulative IO counts from 6301.538 us, reaching 100.0000% at 32263.877 us ]
00:07:50.062 
00:07:50.062 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:50.062 ==============================================================================
00:07:50.062        Range in us     Cumulative    IO count
00:07:50.062  [ per-bucket cumulative IO counts from 6301.538 us, reaching 100.0000% at 31457.280 us ]
00:07:50.063 
00:07:50.063 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:50.063 ==============================================================================
00:07:50.063        Range in us     Cumulative    IO count
00:07:50.064  [ per-bucket cumulative IO counts from 6402.363 us, reaching 100.0000% at 23693.785 us ]
00:07:50.064 
00:07:50.064 02:52:20 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:07:50.064 
00:07:50.064 real 0m2.560s
00:07:50.064 user 0m2.220s
00:07:50.064 sys 0m0.204s
00:07:50.064 02:52:20 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:50.064 02:52:20 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:07:50.064 ************************************
00:07:50.064 END TEST nvme_perf
00:07:50.064 ************************************
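A note on reading the cumulative latency histograms above: each bucket line has the form "low - high: cumulative% ( count )", so the first bucket whose cumulative percentage reaches a target value gives an upper bound on that percentile. A minimal sketch (not part of the original run; the saved file name is hypothetical and the timestamp prefix is assumed to have been stripped) for pulling an approximate 99th percentile out of such a dump with awk:

    # print the upper edge (in us) of the first bucket at or above 99% cumulative IOs
    awk '$4 + 0 >= 99 { sub(/:$/, "", $3); print "p99 <= " $3 " us"; exit }' nsid1_histogram.txt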
00:07:50.064 02:52:20 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:50.064 02:52:20 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:50.064 02:52:20 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:50.064 02:52:20 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:50.064 ************************************
00:07:50.064 START TEST nvme_hello_world
00:07:50.064 ************************************
00:07:50.064 02:52:20 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:50.064 Initializing NVMe Controllers
00:07:50.064 Attached to 0000:00:11.0
00:07:50.064 Namespace ID: 1 size: 5GB
00:07:50.064 Attached to 0000:00:13.0
00:07:50.064 Namespace ID: 1 size: 1GB
00:07:50.064 Attached to 0000:00:10.0
00:07:50.064 Namespace ID: 1 size: 6GB
00:07:50.064 Attached to 0000:00:12.0
00:07:50.064 Namespace ID: 1 size: 4GB
00:07:50.064 Namespace ID: 2 size: 4GB
00:07:50.064 Namespace ID: 3 size: 4GB
00:07:50.064 Initialization complete.
00:07:50.064 INFO: using host memory buffer for IO
00:07:50.064 Hello world!
00:07:50.064 INFO: using host memory buffer for IO
00:07:50.064 Hello world!
00:07:50.064 INFO: using host memory buffer for IO
00:07:50.064 Hello world!
00:07:50.064 INFO: using host memory buffer for IO
00:07:50.064 Hello world!
00:07:50.064 INFO: using host memory buffer for IO
00:07:50.064 Hello world!
00:07:50.064 INFO: using host memory buffer for IO
00:07:50.064 Hello world!
00:07:50.064 
00:07:50.064 real 0m0.227s
00:07:50.064 user 0m0.085s
00:07:50.064 sys 0m0.099s
00:07:50.064 02:52:20 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:50.064 02:52:20 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:07:50.064 ************************************
00:07:50.064 END TEST nvme_hello_world
00:07:50.064 ************************************
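For reference, the hello_world example exercised above can be re-run by hand with the same argument the harness passed (a sketch, assuming the repo is already built on this VM and hugepages have been configured; the HUGEMEM value is an assumption, not taken from this log):

    cd /home/vagrant/spdk_repo/spdk
    sudo HUGEMEM=2048 scripts/setup.sh      # reserve hugepages and rebind NVMe devices (size assumed)
    sudo build/examples/hello_world -i 0    # -i 0 matches the flag used by the run above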
00:07:50.064 02:52:20 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:50.064 02:52:20 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:50.064 02:52:20 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:50.064 02:52:20 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:50.064 ************************************
00:07:50.064 START TEST nvme_sgl
00:07:50.064 ************************************
00:07:50.064 02:52:20 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:50.323 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:07:50.323 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:07:50.323 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:07:50.323 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:07:50.323 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:07:50.323 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:07:50.323 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:07:50.323 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:07:50.323 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:07:50.323 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:07:50.323 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:07:50.323 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:07:50.323 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:07:50.323 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:07:50.323 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:07:50.323 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:07:50.323 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:07:50.323 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:07:50.323 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:07:50.323 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:07:50.323 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:07:50.323 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:07:50.323 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:07:50.580 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:07:50.580 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:07:50.580 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:07:50.580 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:07:50.580 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:07:50.580 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:07:50.580 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:07:50.580 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:07:50.580 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:07:50.580 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:07:50.580 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:07:50.580 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:07:50.580 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:07:50.580 NVMe Readv/Writev Request test
00:07:50.580 Attached to 0000:00:11.0
00:07:50.580 Attached to 0000:00:13.0
00:07:50.580 Attached to 0000:00:10.0
00:07:50.580 Attached to 0000:00:12.0
00:07:50.580 0000:00:11.0: build_io_request_2 test passed
00:07:50.580 0000:00:11.0: build_io_request_4 test passed
00:07:50.580 0000:00:11.0: build_io_request_5 test passed
00:07:50.580 0000:00:11.0: build_io_request_6 test passed
00:07:50.580 0000:00:11.0: build_io_request_7 test passed
00:07:50.580 0000:00:11.0: build_io_request_10 test passed
00:07:50.581 0000:00:10.0: build_io_request_2 test passed
00:07:50.581 0000:00:10.0: build_io_request_4 test passed
00:07:50.581 0000:00:10.0: build_io_request_5 test passed
00:07:50.581 0000:00:10.0: build_io_request_6 test passed
00:07:50.581 0000:00:10.0: build_io_request_7 test passed
00:07:50.581 0000:00:10.0: build_io_request_10 test passed
00:07:50.581 Cleaning up...
00:07:50.581 
00:07:50.581 real 0m0.312s
00:07:50.581 user 0m0.159s
00:07:50.581 sys 0m0.094s
00:07:50.581 ************************************
00:07:50.581 END TEST nvme_sgl
00:07:50.581 ************************************
00:07:50.581 02:52:21 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:50.581 02:52:21 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
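In the output above the controllers at 0000:00:11.0 and 0000:00:10.0 reject requests 0, 1, 3, 8, 9 and 11 with "Invalid IO length parameter" and report "test passed" for the rest, while 0000:00:13.0 and 0000:00:12.0 reject all twelve; the harness still records the stage as successful. The binary can also be launched on its own the same way the harness invoked it (a sketch; environment setup is assumed to already be in place):

    sudo /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl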
00:07:50.581 02:52:21 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:50.581 02:52:21 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:50.581 02:52:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:50.581 02:52:21 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:50.581 ************************************
00:07:50.581 START TEST nvme_e2edp
00:07:50.581 ************************************
00:07:50.581 02:52:21 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:50.839 NVMe Write/Read with End-to-End data protection test
00:07:50.839 Attached to 0000:00:11.0
00:07:50.839 Attached to 0000:00:13.0
00:07:50.839 Attached to 0000:00:10.0
00:07:50.839 Attached to 0000:00:12.0
00:07:50.839 Cleaning up...
00:07:50.839 
00:07:50.839 real 0m0.210s
00:07:50.839 user 0m0.068s
00:07:50.839 sys 0m0.099s
00:07:50.839 02:52:21 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:50.839 ************************************
00:07:50.839 END TEST nvme_e2edp
00:07:50.839 ************************************
00:07:50.839 02:52:21 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:07:50.839 02:52:21 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:50.839 02:52:21 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:50.839 02:52:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:50.839 02:52:21 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:50.839 ************************************
00:07:50.839 START TEST nvme_reserve
00:07:50.839 ************************************
00:07:50.839 02:52:21 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:51.096 =====================================================
00:07:51.096 NVMe Controller at PCI bus 0, device 17, function 0
00:07:51.096 =====================================================
00:07:51.096 Reservations: Not Supported
00:07:51.096 =====================================================
00:07:51.096 NVMe Controller at PCI bus 0, device 19, function 0
00:07:51.096 =====================================================
00:07:51.097 Reservations: Not Supported
00:07:51.097 =====================================================
00:07:51.097 NVMe Controller at PCI bus 0, device 16, function 0
00:07:51.097 =====================================================
00:07:51.097 Reservations: Not Supported
00:07:51.097 =====================================================
00:07:51.097 NVMe Controller at PCI bus 0, device 18, function 0
00:07:51.097 =====================================================
00:07:51.097 Reservations: Not Supported
00:07:51.097 Reservation test passed
00:07:51.097 
00:07:51.097 real 0m0.224s
00:07:51.097 user 0m0.083s
00:07:51.097 sys 0m0.094s
00:07:51.097 02:52:21 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:51.097 ************************************
00:07:51.097 END TEST nvme_reserve
00:07:51.097 ************************************
00:07:51.097 02:52:21 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
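Both of the short tests above attach to the same four controllers; the reserve test simply reports "Reservations: Not Supported" for each device and still passes. Either binary can be launched directly if a by-hand rerun is needed (a sketch; the paths are taken from the invocations above, no extra flags are assumed):

    sudo /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve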
00:07:51.097 02:52:21 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:51.097 02:52:21 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:51.097 02:52:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:51.097 02:52:21 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:51.097 ************************************
00:07:51.097 START TEST nvme_err_injection
00:07:51.097 ************************************
00:07:51.097 02:52:21 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:51.355 NVMe Error Injection test
00:07:51.355 Attached to 0000:00:11.0
00:07:51.355 Attached to 0000:00:13.0
00:07:51.355 Attached to 0000:00:10.0
00:07:51.355 Attached to 0000:00:12.0
00:07:51.355 0000:00:10.0: get features failed as expected
00:07:51.355 0000:00:12.0: get features failed as expected
00:07:51.355 0000:00:11.0: get features failed as expected
00:07:51.355 0000:00:13.0: get features failed as expected
00:07:51.355 0000:00:12.0: get features successfully as expected
00:07:51.355 0000:00:11.0: get features successfully as expected
00:07:51.355 0000:00:13.0: get features successfully as expected
00:07:51.355 0000:00:10.0: get features successfully as expected
00:07:51.355 0000:00:12.0: read failed as expected
00:07:51.355 0000:00:11.0: read failed as expected
00:07:51.355 0000:00:13.0: read failed as expected
00:07:51.355 0000:00:10.0: read failed as expected
00:07:51.355 0000:00:13.0: read successfully as expected
00:07:51.355 0000:00:10.0: read successfully as expected
00:07:51.355 0000:00:12.0: read successfully as expected
00:07:51.355 0000:00:11.0: read successfully as expected
00:07:51.355 Cleaning up...
00:07:51.355 
00:07:51.355 real 0m0.219s
00:07:51.355 user 0m0.075s
00:07:51.355 sys 0m0.100s
00:07:51.355 ************************************
00:07:51.355 END TEST nvme_err_injection
00:07:51.355 ************************************
00:07:51.355 02:52:22 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:51.355 02:52:22 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
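Each of the four controllers shows up exactly once in every phase above (get features failed/succeeded, read failed/succeeded), which is the pattern the harness accepts as a pass. A quick way to confirm that from a captured copy of this output (the file name is hypothetical):

    grep -c 'get features failed as expected' err_injection.log    # expect 4, one line per controller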
00:07:51.355 02:52:22 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:51.355 02:52:22 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:07:51.355 02:52:22 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:51.355 02:52:22 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:51.355 ************************************
00:07:51.355 START TEST nvme_overhead
00:07:51.355 ************************************
00:07:51.355 02:52:22 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:52.792 Initializing NVMe Controllers
00:07:52.792 Attached to 0000:00:11.0
00:07:52.792 Attached to 0000:00:13.0
00:07:52.792 Attached to 0000:00:10.0
00:07:52.792 Attached to 0000:00:12.0
00:07:52.792 Initialization complete. Launching workers.
00:07:52.792 submit (in ns)   avg, min, max = 11388.7, 10391.5, 416443.1
00:07:52.792 complete (in ns) avg, min, max =  7560.7,  7252.3,  74034.6
00:07:52.792 
00:07:52.792 Submit histogram
00:07:52.792 ================
00:07:52.792        Range in us     Cumulative     Count
00:07:52.792  [ per-bucket cumulative counts from 10.388 us, reaching 100.0000% at 419.052 us ]
00:07:52.793 
00:07:52.793 Complete histogram
00:07:52.793 ==================
00:07:52.793        Range in us     Cumulative     Count
00:07:52.793  [ per-bucket cumulative counts from 7.237 us, reaching 100.0000% at 74.043 us ]
00:07:52.793 
00:07:52.793 real 0m1.215s
00:07:52.793 user 0m1.067s
00:07:52.793 sys 0m0.099s
00:07:52.793 02:52:23 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:52.793 02:52:23 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:07:52.793 ************************************
00:07:52.793 END TEST nvme_overhead
00:07:52.793 ************************************
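To repeat just this measurement outside the harness, the overhead tool can be launched with the same flags the run above used (a sketch; no flags beyond those visible in the log are assumed, and prior environment setup is assumed):

    sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0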
/home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:52.793 02:52:23 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:52.793 02:52:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.793 02:52:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:52.793 ************************************ 00:07:52.793 START TEST nvme_arbitration 00:07:52.793 ************************************ 00:07:52.793 02:52:23 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:56.074 Initializing NVMe Controllers 00:07:56.074 Attached to 0000:00:11.0 00:07:56.074 Attached to 0000:00:13.0 00:07:56.074 Attached to 0000:00:10.0 00:07:56.074 Attached to 0000:00:12.0 00:07:56.074 Associating QEMU NVMe Ctrl (12341 ) with lcore 0 00:07:56.074 Associating QEMU NVMe Ctrl (12343 ) with lcore 1 00:07:56.074 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:07:56.074 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:56.074 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:56.074 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:56.074 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:56.074 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:56.074 Initialization complete. Launching workers. 00:07:56.074 Starting thread on core 1 with urgent priority queue 00:07:56.074 Starting thread on core 2 with urgent priority queue 00:07:56.074 Starting thread on core 3 with urgent priority queue 00:07:56.074 Starting thread on core 0 with urgent priority queue 00:07:56.074 QEMU NVMe Ctrl (12341 ) core 0: 917.33 IO/s 109.01 secs/100000 ios 00:07:56.074 QEMU NVMe Ctrl (12342 ) core 0: 917.33 IO/s 109.01 secs/100000 ios 00:07:56.074 QEMU NVMe Ctrl (12343 ) core 1: 704.00 IO/s 142.05 secs/100000 ios 00:07:56.074 QEMU NVMe Ctrl (12342 ) core 1: 704.00 IO/s 142.05 secs/100000 ios 00:07:56.074 QEMU NVMe Ctrl (12340 ) core 2: 917.33 IO/s 109.01 secs/100000 ios 00:07:56.074 QEMU NVMe Ctrl (12342 ) core 3: 960.00 IO/s 104.17 secs/100000 ios 00:07:56.074 ======================================================== 00:07:56.074 00:07:56.074 00:07:56.074 real 0m3.340s 00:07:56.074 user 0m9.300s 00:07:56.074 sys 0m0.121s 00:07:56.074 02:52:26 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.074 02:52:26 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:56.074 ************************************ 00:07:56.074 END TEST nvme_arbitration 00:07:56.074 ************************************ 00:07:56.074 02:52:26 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:56.074 02:52:26 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:56.074 02:52:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.074 02:52:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.074 ************************************ 00:07:56.074 START TEST nvme_single_aen 00:07:56.074 ************************************ 00:07:56.074 02:52:26 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:56.332 Asynchronous Event Request test 00:07:56.332 Attached to 0000:00:11.0 00:07:56.332 Attached to 0000:00:13.0 00:07:56.332 Attached to 0000:00:10.0 00:07:56.332 Attached to 0000:00:12.0 00:07:56.332 
Reset controller to setup AER completions for this process 00:07:56.332 Registering asynchronous event callbacks... 00:07:56.332 Getting orig temperature thresholds of all controllers 00:07:56.332 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:56.332 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:56.332 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:56.332 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:56.332 Setting all controllers temperature threshold low to trigger AER 00:07:56.332 Waiting for all controllers temperature threshold to be set lower 00:07:56.332 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:56.332 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:56.332 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:56.332 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:56.332 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:56.332 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:56.332 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:56.332 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:56.332 Waiting for all controllers to trigger AER and reset threshold 00:07:56.332 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.332 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.332 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.333 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.333 Cleaning up... 00:07:56.333 00:07:56.333 real 0m0.235s 00:07:56.333 user 0m0.084s 00:07:56.333 sys 0m0.104s 00:07:56.333 02:52:26 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.333 ************************************ 00:07:56.333 END TEST nvme_single_aen 00:07:56.333 ************************************ 00:07:56.333 02:52:26 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:56.333 02:52:26 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:56.333 02:52:26 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.333 02:52:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.333 02:52:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.333 ************************************ 00:07:56.333 START TEST nvme_doorbell_aers 00:07:56.333 ************************************ 00:07:56.333 02:52:26 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:07:56.333 02:52:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:56.333 02:52:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:56.333 02:52:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:56.333 02:52:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:56.333 02:52:26 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:56.333 02:52:26 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:07:56.333 02:52:26 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:56.333 02:52:26 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 
00:07:56.333 02:52:26 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:56.333 02:52:27 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:56.333 02:52:27 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:56.333 02:52:27 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:56.333 02:52:27 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:56.591 [2024-12-05 02:52:27.240811] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63181) is not found. Dropping the request. 00:08:06.557 Executing: test_write_invalid_db 00:08:06.557 Waiting for AER completion... 00:08:06.557 Failure: test_write_invalid_db 00:08:06.557 00:08:06.557 Executing: test_invalid_db_write_overflow_sq 00:08:06.557 Waiting for AER completion... 00:08:06.557 Failure: test_invalid_db_write_overflow_sq 00:08:06.557 00:08:06.557 Executing: test_invalid_db_write_overflow_cq 00:08:06.557 Waiting for AER completion... 00:08:06.557 Failure: test_invalid_db_write_overflow_cq 00:08:06.557 00:08:06.557 02:52:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:06.557 02:52:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:06.557 [2024-12-05 02:52:37.311206] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63181) is not found. Dropping the request. 00:08:16.532 Executing: test_write_invalid_db 00:08:16.532 Waiting for AER completion... 00:08:16.532 Failure: test_write_invalid_db 00:08:16.532 00:08:16.532 Executing: test_invalid_db_write_overflow_sq 00:08:16.532 Waiting for AER completion... 00:08:16.532 Failure: test_invalid_db_write_overflow_sq 00:08:16.532 00:08:16.532 Executing: test_invalid_db_write_overflow_cq 00:08:16.532 Waiting for AER completion... 00:08:16.532 Failure: test_invalid_db_write_overflow_cq 00:08:16.532 00:08:16.532 02:52:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:16.532 02:52:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:16.532 [2024-12-05 02:52:47.320397] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63181) is not found. Dropping the request. 00:08:26.507 Executing: test_write_invalid_db 00:08:26.507 Waiting for AER completion... 00:08:26.507 Failure: test_write_invalid_db 00:08:26.507 00:08:26.507 Executing: test_invalid_db_write_overflow_sq 00:08:26.507 Waiting for AER completion... 00:08:26.507 Failure: test_invalid_db_write_overflow_sq 00:08:26.507 00:08:26.507 Executing: test_invalid_db_write_overflow_cq 00:08:26.507 Waiting for AER completion... 
00:08:26.507 Failure: test_invalid_db_write_overflow_cq 00:08:26.507 00:08:26.507 02:52:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:26.507 02:52:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:26.766 [2024-12-05 02:52:57.360008] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63181) is not found. Dropping the request. 00:08:36.732 Executing: test_write_invalid_db 00:08:36.732 Waiting for AER completion... 00:08:36.732 Failure: test_write_invalid_db 00:08:36.732 00:08:36.732 Executing: test_invalid_db_write_overflow_sq 00:08:36.732 Waiting for AER completion... 00:08:36.732 Failure: test_invalid_db_write_overflow_sq 00:08:36.732 00:08:36.732 Executing: test_invalid_db_write_overflow_cq 00:08:36.732 Waiting for AER completion... 00:08:36.732 Failure: test_invalid_db_write_overflow_cq 00:08:36.732 00:08:36.732 00:08:36.732 real 0m40.211s 00:08:36.732 user 0m34.194s 00:08:36.732 sys 0m5.633s 00:08:36.732 02:53:07 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.732 ************************************ 00:08:36.732 END TEST nvme_doorbell_aers 00:08:36.732 02:53:07 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:36.732 ************************************ 00:08:36.732 02:53:07 nvme -- nvme/nvme.sh@97 -- # uname 00:08:36.732 02:53:07 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:36.732 02:53:07 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:36.732 02:53:07 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:36.732 02:53:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.732 02:53:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:36.732 ************************************ 00:08:36.732 START TEST nvme_multi_aen 00:08:36.732 ************************************ 00:08:36.732 02:53:07 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:36.732 [2024-12-05 02:53:07.425300] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63181) is not found. Dropping the request. 00:08:36.732 [2024-12-05 02:53:07.425352] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63181) is not found. Dropping the request. 00:08:36.732 [2024-12-05 02:53:07.425362] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63181) is not found. Dropping the request. 00:08:36.732 [2024-12-05 02:53:07.426408] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63181) is not found. Dropping the request. 00:08:36.732 [2024-12-05 02:53:07.426429] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63181) is not found. Dropping the request. 00:08:36.732 [2024-12-05 02:53:07.426437] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63181) is not found. Dropping the request. 00:08:36.732 [2024-12-05 02:53:07.427414] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63181) is not found. 
Dropping the request. 00:08:36.732 [2024-12-05 02:53:07.427448] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63181) is not found. Dropping the request. 00:08:36.732 [2024-12-05 02:53:07.427456] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63181) is not found. Dropping the request. 00:08:36.732 [2024-12-05 02:53:07.428266] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63181) is not found. Dropping the request. 00:08:36.732 [2024-12-05 02:53:07.428290] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63181) is not found. Dropping the request. 00:08:36.732 [2024-12-05 02:53:07.428297] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63181) is not found. Dropping the request. 00:08:36.732 Child process pid: 63702 00:08:36.990 [Child] Asynchronous Event Request test 00:08:36.990 [Child] Attached to 0000:00:11.0 00:08:36.990 [Child] Attached to 0000:00:13.0 00:08:36.990 [Child] Attached to 0000:00:10.0 00:08:36.990 [Child] Attached to 0000:00:12.0 00:08:36.990 [Child] Registering asynchronous event callbacks... 00:08:36.990 [Child] Getting orig temperature thresholds of all controllers 00:08:36.990 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:36.990 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:36.990 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:36.990 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:36.990 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:36.990 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:36.990 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:36.990 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:36.990 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:36.990 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:36.990 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:36.990 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:36.990 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:36.990 [Child] Cleaning up... 00:08:36.990 Asynchronous Event Request test 00:08:36.990 Attached to 0000:00:11.0 00:08:36.990 Attached to 0000:00:13.0 00:08:36.990 Attached to 0000:00:10.0 00:08:36.990 Attached to 0000:00:12.0 00:08:36.990 Reset controller to setup AER completions for this process 00:08:36.990 Registering asynchronous event callbacks... 
00:08:36.990 Getting orig temperature thresholds of all controllers 00:08:36.990 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:36.990 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:36.990 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:36.990 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:36.990 Setting all controllers temperature threshold low to trigger AER 00:08:36.990 Waiting for all controllers temperature threshold to be set lower 00:08:36.991 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:36.991 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:36.991 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:36.991 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:36.991 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:36.991 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:36.991 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:36.991 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:36.991 Waiting for all controllers to trigger AER and reset threshold 00:08:36.991 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:36.991 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:36.991 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:36.991 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:36.991 Cleaning up... 00:08:36.991 00:08:36.991 real 0m0.431s 00:08:36.991 user 0m0.138s 00:08:36.991 sys 0m0.182s 00:08:36.991 02:53:07 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.991 02:53:07 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:36.991 ************************************ 00:08:36.991 END TEST nvme_multi_aen 00:08:36.991 ************************************ 00:08:36.991 02:53:07 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:36.991 02:53:07 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:36.991 02:53:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.991 02:53:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:36.991 ************************************ 00:08:36.991 START TEST nvme_startup 00:08:36.991 ************************************ 00:08:36.991 02:53:07 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:37.249 Initializing NVMe Controllers 00:08:37.249 Attached to 0000:00:11.0 00:08:37.249 Attached to 0000:00:13.0 00:08:37.249 Attached to 0000:00:10.0 00:08:37.249 Attached to 0000:00:12.0 00:08:37.249 Initialization complete. 00:08:37.249 Time used:150090.812 (us). 
00:08:37.249 00:08:37.249 real 0m0.212s 00:08:37.249 user 0m0.066s 00:08:37.249 sys 0m0.100s 00:08:37.249 02:53:07 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.249 02:53:07 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:37.249 ************************************ 00:08:37.249 END TEST nvme_startup 00:08:37.249 ************************************ 00:08:37.249 02:53:07 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:37.249 02:53:07 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:37.249 02:53:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.249 02:53:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:37.249 ************************************ 00:08:37.249 START TEST nvme_multi_secondary 00:08:37.249 ************************************ 00:08:37.249 02:53:07 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:37.250 02:53:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63752 00:08:37.250 02:53:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63753 00:08:37.250 02:53:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:37.250 02:53:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:37.250 02:53:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:40.550 Initializing NVMe Controllers 00:08:40.550 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:40.550 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:40.550 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:40.550 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:40.550 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:40.550 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:40.550 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:40.550 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:40.550 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:40.550 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:40.550 Initialization complete. Launching workers. 
00:08:40.550 ======================================================== 00:08:40.550 Latency(us) 00:08:40.550 Device Information : IOPS MiB/s Average min max 00:08:40.550 PCIE (0000:00:11.0) NSID 1 from core 2: 2121.99 8.29 7539.05 1000.96 16176.31 00:08:40.550 PCIE (0000:00:13.0) NSID 1 from core 2: 2121.99 8.29 7539.96 981.35 17131.48 00:08:40.550 PCIE (0000:00:10.0) NSID 1 from core 2: 2121.99 8.29 7538.96 979.99 17242.81 00:08:40.550 PCIE (0000:00:12.0) NSID 1 from core 2: 2121.99 8.29 7539.47 961.31 16518.59 00:08:40.550 PCIE (0000:00:12.0) NSID 2 from core 2: 2121.99 8.29 7541.14 1016.78 19714.13 00:08:40.550 PCIE (0000:00:12.0) NSID 3 from core 2: 2121.99 8.29 7541.22 961.95 16607.72 00:08:40.550 ======================================================== 00:08:40.550 Total : 12731.92 49.73 7539.96 961.31 19714.13 00:08:40.550 00:08:40.550 Initializing NVMe Controllers 00:08:40.550 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:40.550 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:40.550 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:40.550 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:40.550 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:40.550 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:40.550 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:40.550 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:40.550 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:40.550 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:40.550 Initialization complete. Launching workers. 00:08:40.550 ======================================================== 00:08:40.550 Latency(us) 00:08:40.550 Device Information : IOPS MiB/s Average min max 00:08:40.550 PCIE (0000:00:11.0) NSID 1 from core 1: 5176.65 20.22 3090.33 803.14 9064.18 00:08:40.550 PCIE (0000:00:13.0) NSID 1 from core 1: 5176.65 20.22 3090.48 814.42 8579.97 00:08:40.550 PCIE (0000:00:10.0) NSID 1 from core 1: 5176.65 20.22 3089.45 787.05 8599.66 00:08:40.550 PCIE (0000:00:12.0) NSID 1 from core 1: 5176.65 20.22 3090.57 811.83 9131.63 00:08:40.550 PCIE (0000:00:12.0) NSID 2 from core 1: 5176.65 20.22 3090.53 815.82 8591.67 00:08:40.550 PCIE (0000:00:12.0) NSID 3 from core 1: 5176.65 20.22 3090.60 814.05 8116.67 00:08:40.550 ======================================================== 00:08:40.550 Total : 31059.90 121.33 3090.33 787.05 9131.63 00:08:40.550 00:08:40.550 02:53:11 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63752 00:08:43.095 Initializing NVMe Controllers 00:08:43.095 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:43.095 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:43.095 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:43.095 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:43.095 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:43.095 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:43.095 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:43.095 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:43.095 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:43.095 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:43.095 Initialization complete. Launching workers. 
00:08:43.095 ======================================================== 00:08:43.095 Latency(us) 00:08:43.095 Device Information : IOPS MiB/s Average min max 00:08:43.095 PCIE (0000:00:11.0) NSID 1 from core 0: 7681.27 30.00 2082.58 841.69 7872.33 00:08:43.095 PCIE (0000:00:13.0) NSID 1 from core 0: 7681.27 30.00 2082.60 851.38 7920.96 00:08:43.095 PCIE (0000:00:10.0) NSID 1 from core 0: 7681.27 30.00 2081.66 857.26 8451.21 00:08:43.095 PCIE (0000:00:12.0) NSID 1 from core 0: 7681.27 30.00 2082.56 865.71 8778.48 00:08:43.095 PCIE (0000:00:12.0) NSID 2 from core 0: 7681.27 30.00 2082.54 780.22 8903.93 00:08:43.095 PCIE (0000:00:12.0) NSID 3 from core 0: 7681.27 30.00 2082.53 684.88 8261.06 00:08:43.095 ======================================================== 00:08:43.095 Total : 46087.62 180.03 2082.41 684.88 8903.93 00:08:43.095 00:08:43.095 02:53:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63753 00:08:43.095 02:53:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63828 00:08:43.095 02:53:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:43.095 02:53:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63829 00:08:43.095 02:53:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:43.095 02:53:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:46.390 Initializing NVMe Controllers 00:08:46.390 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:46.390 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:46.390 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:46.390 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:46.390 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:46.390 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:46.390 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:46.390 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:46.390 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:46.390 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:46.390 Initialization complete. Launching workers. 
00:08:46.390 ======================================================== 00:08:46.390 Latency(us) 00:08:46.390 Device Information : IOPS MiB/s Average min max 00:08:46.390 PCIE (0000:00:11.0) NSID 1 from core 0: 5570.21 21.76 2871.96 987.43 7230.33 00:08:46.390 PCIE (0000:00:13.0) NSID 1 from core 0: 5570.21 21.76 2872.12 1012.64 7187.00 00:08:46.390 PCIE (0000:00:10.0) NSID 1 from core 0: 5570.21 21.76 2871.20 998.86 7707.58 00:08:46.390 PCIE (0000:00:12.0) NSID 1 from core 0: 5570.21 21.76 2872.19 1023.58 7060.93 00:08:46.390 PCIE (0000:00:12.0) NSID 2 from core 0: 5570.21 21.76 2872.21 1047.27 6766.83 00:08:46.390 PCIE (0000:00:12.0) NSID 3 from core 0: 5570.21 21.76 2872.20 1050.56 6420.88 00:08:46.390 ======================================================== 00:08:46.390 Total : 33421.27 130.55 2871.98 987.43 7707.58 00:08:46.390 00:08:46.390 Initializing NVMe Controllers 00:08:46.390 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:46.390 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:46.390 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:46.390 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:46.390 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:46.390 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:46.390 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:46.390 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:46.390 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:46.390 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:46.390 Initialization complete. Launching workers. 00:08:46.390 ======================================================== 00:08:46.390 Latency(us) 00:08:46.390 Device Information : IOPS MiB/s Average min max 00:08:46.390 PCIE (0000:00:11.0) NSID 1 from core 1: 5567.56 21.75 2873.32 1113.06 7683.21 00:08:46.390 PCIE (0000:00:13.0) NSID 1 from core 1: 5567.56 21.75 2873.30 1136.65 6959.20 00:08:46.390 PCIE (0000:00:10.0) NSID 1 from core 1: 5567.56 21.75 2872.25 1112.45 6113.93 00:08:46.390 PCIE (0000:00:12.0) NSID 1 from core 1: 5567.56 21.75 2873.21 1111.40 6296.67 00:08:46.390 PCIE (0000:00:12.0) NSID 2 from core 1: 5567.56 21.75 2873.18 967.18 6801.68 00:08:46.390 PCIE (0000:00:12.0) NSID 3 from core 1: 5567.56 21.75 2873.18 866.36 7835.52 00:08:46.390 ======================================================== 00:08:46.390 Total : 33405.35 130.49 2873.07 866.36 7835.52 00:08:46.390 00:08:48.305 Initializing NVMe Controllers 00:08:48.305 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:48.305 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:48.305 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:48.305 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:48.305 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:48.305 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:48.305 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:48.305 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:48.305 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:48.305 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:48.305 Initialization complete. Launching workers. 
00:08:48.305 ======================================================== 00:08:48.305 Latency(us) 00:08:48.305 Device Information : IOPS MiB/s Average min max 00:08:48.305 PCIE (0000:00:11.0) NSID 1 from core 2: 3509.90 13.71 4558.17 871.85 13258.21 00:08:48.305 PCIE (0000:00:13.0) NSID 1 from core 2: 3509.90 13.71 4558.28 810.08 16489.41 00:08:48.305 PCIE (0000:00:10.0) NSID 1 from core 2: 3509.90 13.71 4557.19 823.79 13280.74 00:08:48.305 PCIE (0000:00:12.0) NSID 1 from core 2: 3509.90 13.71 4558.12 845.79 13454.30 00:08:48.306 PCIE (0000:00:12.0) NSID 2 from core 2: 3509.90 13.71 4557.81 846.61 14106.28 00:08:48.306 PCIE (0000:00:12.0) NSID 3 from core 2: 3509.90 13.71 4557.95 858.38 14128.61 00:08:48.306 ======================================================== 00:08:48.306 Total : 21059.38 82.26 4557.92 810.08 16489.41 00:08:48.306 00:08:48.306 02:53:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63828 00:08:48.306 02:53:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63829 00:08:48.306 00:08:48.306 real 0m10.901s 00:08:48.306 user 0m18.377s 00:08:48.306 sys 0m0.698s 00:08:48.306 02:53:18 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.306 02:53:18 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:48.306 ************************************ 00:08:48.306 END TEST nvme_multi_secondary 00:08:48.306 ************************************ 00:08:48.306 02:53:18 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:48.306 02:53:18 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:48.306 02:53:18 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62785 ]] 00:08:48.306 02:53:18 nvme -- common/autotest_common.sh@1094 -- # kill 62785 00:08:48.306 02:53:18 nvme -- common/autotest_common.sh@1095 -- # wait 62785 00:08:48.306 [2024-12-05 02:53:18.881923] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63701) is not found. Dropping the request. 00:08:48.306 [2024-12-05 02:53:18.881982] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63701) is not found. Dropping the request. 00:08:48.306 [2024-12-05 02:53:18.882003] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63701) is not found. Dropping the request. 00:08:48.306 [2024-12-05 02:53:18.882015] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63701) is not found. Dropping the request. 00:08:48.306 [2024-12-05 02:53:18.883753] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63701) is not found. Dropping the request. 00:08:48.306 [2024-12-05 02:53:18.883793] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63701) is not found. Dropping the request. 00:08:48.306 [2024-12-05 02:53:18.883805] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63701) is not found. Dropping the request. 00:08:48.306 [2024-12-05 02:53:18.883818] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63701) is not found. Dropping the request. 00:08:48.306 [2024-12-05 02:53:18.885515] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63701) is not found. Dropping the request. 
00:08:48.306 [2024-12-05 02:53:18.885554] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63701) is not found. Dropping the request. 00:08:48.306 [2024-12-05 02:53:18.885565] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63701) is not found. Dropping the request. 00:08:48.306 [2024-12-05 02:53:18.885577] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63701) is not found. Dropping the request. 00:08:48.306 [2024-12-05 02:53:18.887269] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63701) is not found. Dropping the request. 00:08:48.306 [2024-12-05 02:53:18.887306] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63701) is not found. Dropping the request. 00:08:48.306 [2024-12-05 02:53:18.887318] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63701) is not found. Dropping the request. 00:08:48.306 [2024-12-05 02:53:18.887330] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63701) is not found. Dropping the request. 00:08:48.306 02:53:19 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:48.306 02:53:19 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:48.306 02:53:19 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:48.306 02:53:19 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:48.306 02:53:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.306 02:53:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:48.306 ************************************ 00:08:48.306 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:48.306 ************************************ 00:08:48.306 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:48.306 * Looking for test storage... 
00:08:48.306 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:48.306 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:48.306 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:08:48.306 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:48.306 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:48.306 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:48.306 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:48.306 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:48.306 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:48.306 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:48.306 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:48.306 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:48.306 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:48.306 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:48.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.568 --rc genhtml_branch_coverage=1 00:08:48.568 --rc genhtml_function_coverage=1 00:08:48.568 --rc genhtml_legend=1 00:08:48.568 --rc geninfo_all_blocks=1 00:08:48.568 --rc geninfo_unexecuted_blocks=1 00:08:48.568 00:08:48.568 ' 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:48.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.568 --rc genhtml_branch_coverage=1 00:08:48.568 --rc genhtml_function_coverage=1 00:08:48.568 --rc genhtml_legend=1 00:08:48.568 --rc geninfo_all_blocks=1 00:08:48.568 --rc geninfo_unexecuted_blocks=1 00:08:48.568 00:08:48.568 ' 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:48.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.568 --rc genhtml_branch_coverage=1 00:08:48.568 --rc genhtml_function_coverage=1 00:08:48.568 --rc genhtml_legend=1 00:08:48.568 --rc geninfo_all_blocks=1 00:08:48.568 --rc geninfo_unexecuted_blocks=1 00:08:48.568 00:08:48.568 ' 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:48.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.568 --rc genhtml_branch_coverage=1 00:08:48.568 --rc genhtml_function_coverage=1 00:08:48.568 --rc genhtml_legend=1 00:08:48.568 --rc geninfo_all_blocks=1 00:08:48.568 --rc geninfo_unexecuted_blocks=1 00:08:48.568 00:08:48.568 ' 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:48.568 
02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=63988 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 63988 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 63988 ']' 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:48.568 02:53:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:48.568 [2024-12-05 02:53:19.292587] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:08:48.569 [2024-12-05 02:53:19.292822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63988 ] 00:08:48.830 [2024-12-05 02:53:19.463160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:48.830 [2024-12-05 02:53:19.562055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.830 [2024-12-05 02:53:19.562202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:48.830 [2024-12-05 02:53:19.562508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.830 [2024-12-05 02:53:19.562536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:49.418 02:53:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.418 02:53:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:49.418 02:53:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:49.418 02:53:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.418 02:53:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:49.418 nvme0n1 00:08:49.418 02:53:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.418 02:53:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:49.418 02:53:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_NnuJG.txt 00:08:49.418 02:53:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:49.418 02:53:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.418 02:53:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:49.418 true 00:08:49.418 02:53:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.418 02:53:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:49.418 02:53:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733367200 00:08:49.418 02:53:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64009 00:08:49.418 02:53:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:49.418 02:53:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:49.418 02:53:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:52.024 [2024-12-05 02:53:22.245561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:52.024 [2024-12-05 02:53:22.245815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:52.024 [2024-12-05 02:53:22.245838] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:52.024 [2024-12-05 02:53:22.245851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:52.024 [2024-12-05 02:53:22.247505] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:52.024 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64009 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64009 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64009 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_NnuJG.txt 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_NnuJG.txt 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 63988 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 63988 ']' 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 63988 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63988 00:08:52.024 killing process with pid 63988 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63988' 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 63988 00:08:52.024 02:53:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 63988 00:08:52.967 02:53:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:52.967 02:53:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:52.967 00:08:52.967 real 0m4.526s 00:08:52.967 user 0m16.043s 00:08:52.967 sys 0m0.505s 00:08:52.967 ************************************ 00:08:52.967 END TEST bdev_nvme_reset_stuck_adm_cmd 
00:08:52.967 ************************************ 00:08:52.967 02:53:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.967 02:53:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:52.967 02:53:23 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:52.967 02:53:23 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:52.967 02:53:23 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:52.967 02:53:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.967 02:53:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:52.967 ************************************ 00:08:52.967 START TEST nvme_fio 00:08:52.967 ************************************ 00:08:52.967 02:53:23 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:52.967 02:53:23 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:52.967 02:53:23 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:52.967 02:53:23 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:52.967 02:53:23 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:52.967 02:53:23 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:52.967 02:53:23 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:52.967 02:53:23 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:52.967 02:53:23 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:52.967 02:53:23 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:52.967 02:53:23 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:52.967 02:53:23 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:52.968 02:53:23 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:52.968 02:53:23 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:52.968 02:53:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:52.968 02:53:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:53.228 02:53:23 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:53.228 02:53:23 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:53.489 02:53:24 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:53.489 02:53:24 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:53.489 02:53:24 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:53.489 02:53:24 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:53.489 02:53:24 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:53.489 02:53:24 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:53.489 02:53:24 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:53.489 02:53:24 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:53.489 02:53:24 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:53.489 02:53:24 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:53.489 02:53:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:53.489 02:53:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:53.489 02:53:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:53.489 02:53:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:53.489 02:53:24 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:53.489 02:53:24 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:53.489 02:53:24 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:53.489 02:53:24 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:53.489 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:53.489 fio-3.35 00:08:53.489 Starting 1 thread 00:09:00.092 00:09:00.092 test: (groupid=0, jobs=1): err= 0: pid=64150: Thu Dec 5 02:53:30 2024 00:09:00.092 read: IOPS=21.8k, BW=85.2MiB/s (89.3MB/s)(170MiB/2001msec) 00:09:00.092 slat (nsec): min=3341, max=92963, avg=5179.88, stdev=2466.52 00:09:00.092 clat (usec): min=308, max=8182, avg=2926.72, stdev=973.81 00:09:00.092 lat (usec): min=312, max=8186, avg=2931.90, stdev=975.29 00:09:00.092 clat percentiles (usec): 00:09:00.092 | 1.00th=[ 1811], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2409], 00:09:00.092 | 30.00th=[ 2474], 40.00th=[ 2540], 50.00th=[ 2606], 60.00th=[ 2671], 00:09:00.092 | 70.00th=[ 2802], 80.00th=[ 3097], 90.00th=[ 4080], 95.00th=[ 5473], 00:09:00.092 | 99.00th=[ 6783], 99.50th=[ 6980], 99.90th=[ 7111], 99.95th=[ 7177], 00:09:00.092 | 99.99th=[ 7767] 00:09:00.092 bw ( KiB/s): min=79120, max=93584, per=100.00%, avg=87770.67, stdev=7638.04, samples=3 00:09:00.092 iops : min=19780, max=23396, avg=21942.67, stdev=1909.51, samples=3 00:09:00.092 write: IOPS=21.7k, BW=84.6MiB/s (88.7MB/s)(169MiB/2001msec); 0 zone resets 00:09:00.092 slat (nsec): min=3476, max=62403, avg=5442.29, stdev=2515.97 00:09:00.092 clat (usec): min=337, max=8254, avg=2938.99, stdev=986.17 00:09:00.092 lat (usec): min=342, max=8259, avg=2944.43, stdev=987.67 00:09:00.092 clat percentiles (usec): 00:09:00.092 | 1.00th=[ 1795], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2409], 00:09:00.092 | 30.00th=[ 2474], 40.00th=[ 2540], 50.00th=[ 2606], 60.00th=[ 2671], 00:09:00.092 | 70.00th=[ 2802], 80.00th=[ 3097], 90.00th=[ 4113], 95.00th=[ 5538], 00:09:00.092 | 99.00th=[ 6783], 99.50th=[ 6980], 99.90th=[ 7177], 99.95th=[ 7242], 00:09:00.092 | 99.99th=[ 7373] 00:09:00.092 bw ( KiB/s): min=81080, max=92736, per=100.00%, avg=87984.00, stdev=6118.73, samples=3 00:09:00.092 iops : min=20270, max=23184, avg=21996.00, stdev=1529.68, samples=3 00:09:00.092 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.04% 00:09:00.092 lat (msec) : 2=1.93%, 4=87.26%, 10=10.75% 00:09:00.092 cpu : usr=99.20%, sys=0.10%, ctx=3, majf=0, minf=607 
00:09:00.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:00.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:00.092 issued rwts: total=43644,43337,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:00.092 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:00.092 00:09:00.092 Run status group 0 (all jobs): 00:09:00.092 READ: bw=85.2MiB/s (89.3MB/s), 85.2MiB/s-85.2MiB/s (89.3MB/s-89.3MB/s), io=170MiB (179MB), run=2001-2001msec 00:09:00.092 WRITE: bw=84.6MiB/s (88.7MB/s), 84.6MiB/s-84.6MiB/s (88.7MB/s-88.7MB/s), io=169MiB (178MB), run=2001-2001msec 00:09:00.092 ----------------------------------------------------- 00:09:00.092 Suppressions used: 00:09:00.092 count bytes template 00:09:00.092 1 32 /usr/src/fio/parse.c 00:09:00.092 1 8 libtcmalloc_minimal.so 00:09:00.092 ----------------------------------------------------- 00:09:00.092 00:09:00.092 02:53:30 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:00.092 02:53:30 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:00.092 02:53:30 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:00.092 02:53:30 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:00.092 02:53:30 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:00.092 02:53:30 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:00.092 02:53:30 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:00.092 02:53:30 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:00.092 02:53:30 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:00.092 02:53:30 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:00.092 02:53:30 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:00.092 02:53:30 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:00.092 02:53:30 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:00.092 02:53:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:00.092 02:53:30 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:00.092 02:53:30 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:00.092 02:53:30 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:00.092 02:53:30 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:00.092 02:53:30 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:00.092 02:53:30 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:00.092 02:53:30 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:00.092 02:53:30 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:00.092 02:53:30 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:00.092 02:53:30 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:00.093 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:00.093 fio-3.35 00:09:00.093 Starting 1 thread 00:09:08.236 00:09:08.236 test: (groupid=0, jobs=1): err= 0: pid=64208: Thu Dec 5 02:53:38 2024 00:09:08.236 read: IOPS=24.2k, BW=94.4MiB/s (99.0MB/s)(189MiB/2001msec) 00:09:08.236 slat (nsec): min=3333, max=58780, avg=4895.76, stdev=2034.98 00:09:08.236 clat (usec): min=489, max=8340, avg=2644.48, stdev=811.98 00:09:08.236 lat (usec): min=493, max=8344, avg=2649.37, stdev=813.26 00:09:08.236 clat percentiles (usec): 00:09:08.236 | 1.00th=[ 1893], 5.00th=[ 2089], 10.00th=[ 2147], 20.00th=[ 2245], 00:09:08.236 | 30.00th=[ 2311], 40.00th=[ 2376], 50.00th=[ 2442], 60.00th=[ 2507], 00:09:08.236 | 70.00th=[ 2573], 80.00th=[ 2704], 90.00th=[ 3097], 95.00th=[ 4424], 00:09:08.236 | 99.00th=[ 6456], 99.50th=[ 6652], 99.90th=[ 7570], 99.95th=[ 7701], 00:09:08.236 | 99.99th=[ 8160] 00:09:08.236 bw ( KiB/s): min=97013, max=98275, per=100.00%, avg=97637.33, stdev=631.11, samples=3 00:09:08.236 iops : min=24253, max=24568, avg=24409.00, stdev=157.52, samples=3 00:09:08.236 write: IOPS=24.0k, BW=93.8MiB/s (98.3MB/s)(188MiB/2001msec); 0 zone resets 00:09:08.236 slat (nsec): min=3463, max=56152, avg=5132.75, stdev=2041.34 00:09:08.236 clat (usec): min=481, max=8402, avg=2648.45, stdev=803.69 00:09:08.236 lat (usec): min=485, max=8407, avg=2653.58, stdev=804.96 00:09:08.236 clat percentiles (usec): 00:09:08.236 | 1.00th=[ 1893], 5.00th=[ 2114], 10.00th=[ 2180], 20.00th=[ 2245], 00:09:08.236 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2442], 60.00th=[ 2507], 00:09:08.236 | 70.00th=[ 2573], 80.00th=[ 2704], 90.00th=[ 3064], 95.00th=[ 4424], 00:09:08.236 | 99.00th=[ 6456], 99.50th=[ 6652], 99.90th=[ 7570], 99.95th=[ 7832], 00:09:08.236 | 99.99th=[ 8225] 00:09:08.236 bw ( KiB/s): min=96574, max=98256, per=100.00%, avg=97613.67, stdev=908.67, samples=3 00:09:08.236 iops : min=24143, max=24564, avg=24403.00, stdev=227.29, samples=3 00:09:08.236 lat (usec) : 500=0.01%, 750=0.01% 00:09:08.236 lat (msec) : 2=2.33%, 4=91.55%, 10=6.12% 00:09:08.236 cpu : usr=99.30%, sys=0.05%, ctx=6, majf=0, minf=606 00:09:08.236 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:08.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:08.236 issued rwts: total=48356,48032,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.236 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:08.236 00:09:08.236 Run status group 0 (all jobs): 00:09:08.236 READ: bw=94.4MiB/s (99.0MB/s), 94.4MiB/s-94.4MiB/s (99.0MB/s-99.0MB/s), io=189MiB (198MB), run=2001-2001msec 00:09:08.236 WRITE: bw=93.8MiB/s (98.3MB/s), 93.8MiB/s-93.8MiB/s (98.3MB/s-98.3MB/s), io=188MiB (197MB), run=2001-2001msec 00:09:08.236 ----------------------------------------------------- 00:09:08.236 Suppressions used: 00:09:08.236 count bytes template 00:09:08.236 1 32 /usr/src/fio/parse.c 00:09:08.236 1 8 libtcmalloc_minimal.so 00:09:08.236 ----------------------------------------------------- 00:09:08.236 00:09:08.237 02:53:38 nvme.nvme_fio -- 
nvme/nvme.sh@44 -- # ran_fio=true 00:09:08.237 02:53:38 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:08.237 02:53:38 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:08.237 02:53:38 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:08.237 02:53:38 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:08.237 02:53:38 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:08.237 02:53:38 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:08.237 02:53:38 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:08.237 02:53:38 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:08.237 02:53:38 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:08.237 02:53:38 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:08.237 02:53:38 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:08.237 02:53:38 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:08.237 02:53:38 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:08.237 02:53:38 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:08.237 02:53:38 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:08.237 02:53:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:08.237 02:53:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:08.237 02:53:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:08.237 02:53:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:08.237 02:53:38 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:08.237 02:53:38 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:08.237 02:53:38 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:08.237 02:53:38 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:08.237 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:08.237 fio-3.35 00:09:08.237 Starting 1 thread 00:09:16.373 00:09:16.373 test: (groupid=0, jobs=1): err= 0: pid=64267: Thu Dec 5 02:53:45 2024 00:09:16.373 read: IOPS=23.2k, BW=90.7MiB/s (95.1MB/s)(182MiB/2001msec) 00:09:16.373 slat (nsec): min=4210, max=79297, avg=5081.71, stdev=2322.86 00:09:16.373 clat (usec): min=744, max=9606, avg=2748.34, stdev=870.21 00:09:16.373 lat (usec): min=756, max=9640, avg=2753.42, stdev=871.71 00:09:16.373 clat percentiles (usec): 00:09:16.373 | 1.00th=[ 2008], 5.00th=[ 2278], 10.00th=[ 2311], 20.00th=[ 2376], 00:09:16.373 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 
2474], 60.00th=[ 2507], 00:09:16.373 | 70.00th=[ 2540], 80.00th=[ 2671], 90.00th=[ 3621], 95.00th=[ 5145], 00:09:16.373 | 99.00th=[ 6259], 99.50th=[ 6521], 99.90th=[ 7832], 99.95th=[ 8094], 00:09:16.373 | 99.99th=[ 9372] 00:09:16.373 bw ( KiB/s): min=87856, max=94624, per=98.58%, avg=91565.33, stdev=3430.59, samples=3 00:09:16.373 iops : min=21964, max=23656, avg=22891.33, stdev=857.65, samples=3 00:09:16.373 write: IOPS=23.1k, BW=90.2MiB/s (94.5MB/s)(180MiB/2001msec); 0 zone resets 00:09:16.373 slat (usec): min=4, max=121, avg= 5.40, stdev= 2.44 00:09:16.373 clat (usec): min=580, max=9396, avg=2761.04, stdev=885.70 00:09:16.373 lat (usec): min=592, max=9406, avg=2766.44, stdev=887.26 00:09:16.373 clat percentiles (usec): 00:09:16.373 | 1.00th=[ 1991], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2376], 00:09:16.373 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2507], 00:09:16.373 | 70.00th=[ 2540], 80.00th=[ 2671], 90.00th=[ 3654], 95.00th=[ 5276], 00:09:16.373 | 99.00th=[ 6325], 99.50th=[ 6521], 99.90th=[ 7898], 99.95th=[ 8094], 00:09:16.373 | 99.99th=[ 9110] 00:09:16.373 bw ( KiB/s): min=89576, max=94472, per=99.33%, avg=91701.33, stdev=2510.98, samples=3 00:09:16.373 iops : min=22394, max=23618, avg=22925.33, stdev=627.75, samples=3 00:09:16.373 lat (usec) : 750=0.01%, 1000=0.01% 00:09:16.373 lat (msec) : 2=1.00%, 4=90.36%, 10=8.63% 00:09:16.373 cpu : usr=99.30%, sys=0.00%, ctx=31, majf=0, minf=606 00:09:16.373 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:16.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.373 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.373 issued rwts: total=46467,46183,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.373 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.373 00:09:16.373 Run status group 0 (all jobs): 00:09:16.373 READ: bw=90.7MiB/s (95.1MB/s), 90.7MiB/s-90.7MiB/s (95.1MB/s-95.1MB/s), io=182MiB (190MB), run=2001-2001msec 00:09:16.373 WRITE: bw=90.2MiB/s (94.5MB/s), 90.2MiB/s-90.2MiB/s (94.5MB/s-94.5MB/s), io=180MiB (189MB), run=2001-2001msec 00:09:16.373 ----------------------------------------------------- 00:09:16.373 Suppressions used: 00:09:16.373 count bytes template 00:09:16.373 1 32 /usr/src/fio/parse.c 00:09:16.373 1 8 libtcmalloc_minimal.so 00:09:16.373 ----------------------------------------------------- 00:09:16.373 00:09:16.373 02:53:45 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:16.373 02:53:45 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:16.373 02:53:45 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:16.373 02:53:45 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:16.373 02:53:46 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:16.373 02:53:46 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:16.373 02:53:46 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:16.373 02:53:46 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:16.373 02:53:46 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:16.373 02:53:46 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:16.373 02:53:46 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:16.373 02:53:46 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:16.373 02:53:46 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:16.373 02:53:46 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:16.373 02:53:46 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:16.373 02:53:46 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:16.373 02:53:46 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:16.373 02:53:46 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:16.373 02:53:46 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:16.373 02:53:46 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:16.373 02:53:46 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:16.373 02:53:46 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:16.373 02:53:46 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:16.373 02:53:46 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:16.373 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:16.373 fio-3.35 00:09:16.373 Starting 1 thread 00:09:26.381 00:09:26.381 test: (groupid=0, jobs=1): err= 0: pid=64328: Thu Dec 5 02:53:56 2024 00:09:26.381 read: IOPS=23.8k, BW=93.1MiB/s (97.7MB/s)(186MiB/2001msec) 00:09:26.381 slat (nsec): min=4209, max=78223, avg=5039.98, stdev=2287.51 00:09:26.381 clat (usec): min=333, max=8658, avg=2677.01, stdev=795.66 00:09:26.381 lat (usec): min=337, max=8707, avg=2682.05, stdev=797.14 00:09:26.381 clat percentiles (usec): 00:09:26.381 | 1.00th=[ 1958], 5.00th=[ 2180], 10.00th=[ 2245], 20.00th=[ 2311], 00:09:26.381 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2507], 00:09:26.381 | 70.00th=[ 2540], 80.00th=[ 2671], 90.00th=[ 3163], 95.00th=[ 4752], 00:09:26.381 | 99.00th=[ 6063], 99.50th=[ 6390], 99.90th=[ 7177], 99.95th=[ 7504], 00:09:26.381 | 99.99th=[ 8356] 00:09:26.381 bw ( KiB/s): min=88640, max=100936, per=98.36%, avg=93805.33, stdev=6379.25, samples=3 00:09:26.381 iops : min=22160, max=25234, avg=23451.33, stdev=1594.81, samples=3 00:09:26.381 write: IOPS=23.7k, BW=92.5MiB/s (97.0MB/s)(185MiB/2001msec); 0 zone resets 00:09:26.381 slat (nsec): min=4318, max=83579, avg=5313.37, stdev=2225.59 00:09:26.381 clat (usec): min=357, max=8384, avg=2685.59, stdev=806.32 00:09:26.381 lat (usec): min=361, max=8399, avg=2690.90, stdev=807.77 00:09:26.381 clat percentiles (usec): 00:09:26.381 | 1.00th=[ 1942], 5.00th=[ 2180], 10.00th=[ 2245], 20.00th=[ 2311], 00:09:26.381 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2507], 00:09:26.381 | 70.00th=[ 2573], 80.00th=[ 2671], 90.00th=[ 3195], 95.00th=[ 4817], 00:09:26.381 | 99.00th=[ 6128], 99.50th=[ 6456], 99.90th=[ 7242], 
99.95th=[ 7570], 00:09:26.381 | 99.99th=[ 8225] 00:09:26.381 bw ( KiB/s): min=88064, max=102048, per=99.06%, avg=93874.67, stdev=7285.24, samples=3 00:09:26.381 iops : min=22016, max=25512, avg=23468.67, stdev=1821.31, samples=3 00:09:26.381 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.01% 00:09:26.381 lat (msec) : 2=1.23%, 4=91.84%, 10=6.89% 00:09:26.381 cpu : usr=99.30%, sys=0.00%, ctx=4, majf=0, minf=604 00:09:26.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:26.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:26.381 issued rwts: total=47707,47405,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.381 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:26.381 00:09:26.381 Run status group 0 (all jobs): 00:09:26.381 READ: bw=93.1MiB/s (97.7MB/s), 93.1MiB/s-93.1MiB/s (97.7MB/s-97.7MB/s), io=186MiB (195MB), run=2001-2001msec 00:09:26.381 WRITE: bw=92.5MiB/s (97.0MB/s), 92.5MiB/s-92.5MiB/s (97.0MB/s-97.0MB/s), io=185MiB (194MB), run=2001-2001msec 00:09:26.381 ----------------------------------------------------- 00:09:26.381 Suppressions used: 00:09:26.381 count bytes template 00:09:26.381 1 32 /usr/src/fio/parse.c 00:09:26.381 1 8 libtcmalloc_minimal.so 00:09:26.381 ----------------------------------------------------- 00:09:26.381 00:09:26.381 02:53:57 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:26.381 02:53:57 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:26.381 00:09:26.381 real 0m33.523s 00:09:26.381 user 0m19.936s 00:09:26.381 sys 0m25.110s 00:09:26.381 02:53:57 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.381 ************************************ 00:09:26.381 END TEST nvme_fio 00:09:26.381 ************************************ 00:09:26.381 02:53:57 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:26.381 00:09:26.381 real 1m42.887s 00:09:26.381 user 3m40.860s 00:09:26.381 sys 0m35.570s 00:09:26.381 02:53:57 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.381 02:53:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:26.381 ************************************ 00:09:26.381 END TEST nvme 00:09:26.381 ************************************ 00:09:26.381 02:53:57 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:26.381 02:53:57 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:26.381 02:53:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:26.381 02:53:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.381 02:53:57 -- common/autotest_common.sh@10 -- # set +x 00:09:26.381 ************************************ 00:09:26.381 START TEST nvme_scc 00:09:26.381 ************************************ 00:09:26.381 02:53:57 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:26.643 * Looking for test storage... 
00:09:26.643 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:26.643 02:53:57 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:26.643 02:53:57 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:26.643 02:53:57 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:26.643 02:53:57 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:26.643 02:53:57 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.643 02:53:57 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.643 02:53:57 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.643 02:53:57 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.643 02:53:57 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.643 02:53:57 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:26.643 02:53:57 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.643 02:53:57 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.643 02:53:57 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.643 02:53:57 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.643 02:53:57 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:26.643 02:53:57 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:26.643 02:53:57 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:26.643 02:53:57 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.643 02:53:57 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:26.643 02:53:57 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:26.643 02:53:57 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:26.643 02:53:57 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.643 02:53:57 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:26.643 02:53:57 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:26.643 02:53:57 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:26.643 02:53:57 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:26.643 02:53:57 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.643 02:53:57 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:26.643 02:53:57 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:26.643 02:53:57 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:26.643 02:53:57 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:26.643 02:53:57 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:26.643 02:53:57 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:26.643 02:53:57 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:26.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.643 --rc genhtml_branch_coverage=1 00:09:26.643 --rc genhtml_function_coverage=1 00:09:26.643 --rc genhtml_legend=1 00:09:26.643 --rc geninfo_all_blocks=1 00:09:26.643 --rc geninfo_unexecuted_blocks=1 00:09:26.643 00:09:26.643 ' 00:09:26.643 02:53:57 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:26.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.643 --rc genhtml_branch_coverage=1 00:09:26.643 --rc genhtml_function_coverage=1 00:09:26.643 --rc genhtml_legend=1 00:09:26.643 --rc geninfo_all_blocks=1 00:09:26.643 --rc geninfo_unexecuted_blocks=1 00:09:26.643 00:09:26.643 ' 00:09:26.643 02:53:57 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:26.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.643 --rc genhtml_branch_coverage=1 00:09:26.643 --rc genhtml_function_coverage=1 00:09:26.643 --rc genhtml_legend=1 00:09:26.643 --rc geninfo_all_blocks=1 00:09:26.643 --rc geninfo_unexecuted_blocks=1 00:09:26.643 00:09:26.643 ' 00:09:26.643 02:53:57 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:26.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.643 --rc genhtml_branch_coverage=1 00:09:26.643 --rc genhtml_function_coverage=1 00:09:26.643 --rc genhtml_legend=1 00:09:26.643 --rc geninfo_all_blocks=1 00:09:26.643 --rc geninfo_unexecuted_blocks=1 00:09:26.643 00:09:26.643 ' 00:09:26.644 02:53:57 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:26.644 02:53:57 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:26.644 02:53:57 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:26.644 02:53:57 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:26.644 02:53:57 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:26.644 02:53:57 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:26.644 02:53:57 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.644 02:53:57 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.644 02:53:57 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.644 02:53:57 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.644 02:53:57 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.644 02:53:57 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.644 02:53:57 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:26.644 02:53:57 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
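The scan that starts on the next lines builds one associative array per controller by replaying nvme-cli's id-ctrl output line by line (IFS=: read -r reg val, then an eval per register). A minimal standalone sketch of that pattern, assuming an nvme binary on PATH and a /dev/nvme0 character device (ctrl0 is an illustrative name, not the harness's own nvme0 array):

  #!/usr/bin/env bash
  # Sketch: turn "field : value" pairs from `nvme id-ctrl` into a bash associative array.
  declare -A ctrl0
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}                # register name, whitespace stripped
      val=${val#"${val%%[![:space:]]*}"}      # drop leading spaces, keep the raw value
      [[ -n $reg && -n $val ]] && ctrl0[$reg]=$val
  done < <(nvme id-ctrl /dev/nvme0)
  echo "vid=${ctrl0[vid]} oncs=${ctrl0[oncs]} mdts=${ctrl0[mdts]}"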
00:09:26.644 02:53:57 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:26.644 02:53:57 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:26.644 02:53:57 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:26.644 02:53:57 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:26.644 02:53:57 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:26.644 02:53:57 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:26.644 02:53:57 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:26.644 02:53:57 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:26.644 02:53:57 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:26.644 02:53:57 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:26.644 02:53:57 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:26.644 02:53:57 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:26.644 02:53:57 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:26.644 02:53:57 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:26.905 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:27.166 Waiting for block devices as requested 00:09:27.166 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:27.166 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:27.166 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:27.166 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.451 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:32.451 02:54:03 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:32.451 02:54:03 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:32.451 02:54:03 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:32.451 02:54:03 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:32.451 02:54:03 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
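For a copy-command test like nvme_scc, the register that matters most is ONCS: per the NVMe spec, bit 8 (0x100) of ONCS advertises the Copy command. A hedged check against the array sketched above (for reference, the oncs value of 0x15d recorded further down in this scan has that bit set):

  oncs=${ctrl0[oncs]:-0}
  if (( oncs & 0x100 )); then
      echo "Simple Copy supported (oncs=$oncs)"
  else
      echo "no Copy support advertised, an SCC test would have to skip this controller"
  fi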
00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.451 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.452 02:54:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
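Of the values already captured above, mdts=7 is the one that bounds per-command I/O size: MDTS is a power-of-two multiplier of the controller's minimum memory page size (CAP.MPSMIN). A back-of-the-envelope conversion, assuming the common 4 KiB minimum page:

  mdts=7              # from the id-ctrl dump above; 0 would mean "no limit"
  mpsmin_bytes=4096   # assumption: CAP.MPSMIN corresponds to 4 KiB pages
  max_xfer=$(( (1 << mdts) * mpsmin_bytes ))
  echo "max data transfer per command: $max_xfer bytes"   # 524288 (512 KiB)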
00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.452 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:32.453 02:54:03 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.453 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.454 02:54:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.454 02:54:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.454 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:32.455 02:54:03 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.455 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:32.456 
02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:32.456 02:54:03 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.456 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.457 02:54:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:32.457 02:54:03 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:32.457 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.458 02:54:03 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:32.458 02:54:03 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.458 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:32.459 02:54:03 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:32.459 02:54:03 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:32.459 02:54:03 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:32.459 02:54:03 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:32.459 02:54:03 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:32.459 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.460 
02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:32.460 
02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.460 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.461 02:54:03 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.461 02:54:03 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:32.461 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
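The block above is the trace of the nvme_get helper in nvme/functions.sh: it runs /usr/local/src/nvme-cli/nvme id-ctrl against the device, loops with `IFS=: read -r reg val`, and for every non-empty value evals an assignment into the controller's associative array (here nvme1), which is why the log repeats the IFS=: / read / [[ -n ... ]] / eval pattern once per identify field (oacs, acl, aerl, frmw, lpa, sqes, cqes, nn, and so on). A minimal standalone sketch of that idea follows, assuming nvme-cli is installed and that id-ctrl prints "field : value" lines; parse_id_ctrl and the trimming details are illustrative, not the exact functions.sh code.

#!/usr/bin/env bash
# Illustrative sketch only: build an associative array from `nvme id-ctrl`
# output, mirroring the eval 'nvme1[field]="value"' assignments in the trace.
parse_id_ctrl() {
    local dev=$1 reg val
    declare -gA ctrl=()                       # counterpart of the nvme1=() array
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}              # drop padding around the field name
        val="${val#"${val%%[![:space:]]*}"}"  # left-trim the value
        val="${val%"${val##*[![:space:]]}"}"  # right-trim the value
        [[ -n $reg && -n $val ]] || continue  # skip headers and blank lines
        ctrl[$reg]=$val                       # e.g. ctrl[oacs]=0x12a
    done < <(nvme id-ctrl "$dev")
}
parse_id_ctrl /dev/nvme1
echo "oacs=${ctrl[oacs]:-unset} sqes=${ctrl[sqes]:-unset} nn=${ctrl[nn]:-unset}"

The real helper takes the target array name by reference (the "local ref=nvme1 reg val" and shift lines in the trace) rather than writing into a fixed ctrl array.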
00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.462 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.463 02:54:03 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:32.463 02:54:03 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:32.463 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
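At functions.sh@53-57, visible just above, the script switches from the controller to its namespaces: it takes a nameref to nvme1_ns and globs "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* so that both the generic character node (ng1n1) and the block node (nvme1n1) under /sys/class/nvme/nvme1 are picked up, then runs nvme id-ns against each and records fields such as nsze, ncap, nuse, nsfeat and the lbaf0-7 formats. A hedged sketch of just the enumeration step follows; the sysfs path and the head filter are for illustration only.

#!/usr/bin/env bash
# Sketch of the namespace walk from the trace: match ngXnY and nvmeXnY nodes
# under the controller's sysfs directory and identify each one.
shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme1                         # assumed controller path
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    ns_dev=${ns##*/}                               # ng1n1 or nvme1n1
    echo "id-ns for $ns_dev"
    nvme id-ns "/dev/$ns_dev" | head -n 5          # nsze/ncap/nuse/... as in the log
done

Each matched node then goes through the same read/eval loop, which is why the id-ns fields appear twice in this log, once under ng1n1 and once under nvme1n1.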
00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:32.464 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:32.465 02:54:03 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:32.465 02:54:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.466 
02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:32.466 
02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.466 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.467 02:54:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.467 02:54:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:32.467 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:32.468 02:54:03 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:32.468 02:54:03 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:32.468 02:54:03 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:32.468 02:54:03 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:32.468 02:54:03 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.468 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
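Everything traced here is one generic capture loop in nvme/functions.sh: the output of /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 is split on ':' with IFS=: and read -r reg val, and each non-empty value is eval'd into a global associative array named after the controller (nvme2[vid], nvme2[ssvid], and so on). A minimal standalone sketch of the same pattern, simplified to avoid eval and assuming only that nvme-cli is installed and $1 names a controller node such as /dev/nvme2:

    #!/usr/bin/env bash
    # Capture `nvme id-ctrl` fields into an associative array, mirroring the
    # IFS=':' / read -r / assign loop visible in the trace above.
    declare -A ctrl
    while IFS=: read -r reg val; do
      [[ -n $val ]] || continue             # keep only "field : value" lines
      ctrl[${reg//[[:space:]]/}]=${val# }   # trim the key, drop the leading space on the value
    done < <(nvme id-ctrl "$1")
    printf 'model=%s fw=%s mdts=%s\n' "${ctrl[mn]}" "${ctrl[fr]}" "${ctrl[mdts]}"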
00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:32.469 02:54:03 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.469 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
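Two of the values just captured, nvme2[wctemp]=343 and nvme2[cctemp]=373, are the warning and critical composite temperature thresholds, which NVMe reports in Kelvin; with the usual integer offset they come out to about 70 °C and 100 °C. A one-line check, assuming the hypothetical ctrl array from the sketch above rather than the test's own nvme2 array:

    echo "warn: $(( ${ctrl[wctemp]} - 273 )) C  crit: $(( ${ctrl[cctemp]} - 273 )) C"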
00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:32.470 02:54:03 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.470 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:32.471 
02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.471 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.472 
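With the controller-level fields done, the loop moves to namespaces: the extglob pattern in the last entry matches both the character-device nodes (ng2n1, ng2n2, ...) and the block-device nodes (nvme2n1, ...) under /sys/class/nvme/nvme2, and each hit is passed to nvme id-ns, exactly as the ng2n1 entries below show. A reduced standalone version of that enumeration, assuming extglob, nvme-cli, and a plain associative array instead of the helper's namerefs:

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2                   # hypothetical controller sysfs path
    declare -A ns_fields
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      dev=${ns##*/}                              # e.g. ng2n1 or nvme2n1
      while IFS=: read -r reg val; do
        [[ -n $val ]] && ns_fields["$dev.${reg//[[:space:]]/}"]=${val# }
      done < <(nvme id-ns "/dev/$dev")
    done
    echo "ng2n1 nsze=${ns_fields[ng2n1.nsze]}"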
02:54:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
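The size fields just captured allow a quick sanity check: ng2n1 reports nsze=0x100000 blocks and flbas=0x4, and further down lbaf4 is marked as the in-use format with lbads:12, i.e. 4096-byte blocks, so the namespace works out to 0x100000 x 4096 bytes = 4 GiB. A worked one-liner with those literal values (taken from this trace, not re-queried from the device):

    nsze=0x100000; lbads=12
    echo "$(( nsze * (1 << lbads) )) bytes"      # 4294967296 bytes = 4 GiB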
00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:32.472 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.473 02:54:03 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.473 02:54:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.473 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:32.474 02:54:03 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:32.474 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.475 
02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.475 02:54:03 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:32.475 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.476 02:54:03 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.476 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.477 02:54:03 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.477 02:54:03 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:32.477 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.478 02:54:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.741 02:54:03 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.741 02:54:03 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.741 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:32.742 02:54:03 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.742 02:54:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.742 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:32.743 02:54:03 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:32.743 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.744 02:54:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.744 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:32.745 
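Editor's note: the pattern repeated throughout this trace (functions.sh lines 17-23, visible again above as the parse of /dev/nvme2n3 begins) is nvme_get: run nvme-cli's id-ns or id-ctrl against the device, split each output line on the first ':' into a register name and a value, and store non-empty values in a global associative array named after the device node. A minimal reconstruction of that loop, assuming output lines of the form "nsze : 0x100000" and using a plain nvme binary in place of the pinned /usr/local/src/nvme-cli path; this is a sketch, not the verbatim test/nvme/functions.sh source:

    # Sketch of the nvme_get loop being traced above.
    nvme_get() {
        local ref=$1 reg val
        shift                                   # remaining args: e.g. id-ns /dev/nvme2n3
        declare -gA "$ref"                      # global associative array, e.g. nvme2n3
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}            # "nsze " -> "nsze"
            [[ -n $val ]] || continue           # skip headers and empty values
            eval "${ref}[${reg}]=\"${val# }\""  # nvme2n3[nsze]="0x100000"
        done < <(nvme "$@")
    }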
02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:32.745 02:54:03 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.745 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.746 02:54:03 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.746 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:32.747 02:54:03 nvme_scc -- 
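Editor's note: the lbaf0-lbaf7 entries above list the eight LBA formats this namespace supports, and flbas=0x4 marks format 4 (lbads:12, ms:0) as the one in use, i.e. 4096-byte logical blocks with no metadata. A small illustrative derivation from the array contents; the 0xf mask on flbas and the "block size = 2^lbads" relation are standard Identify Namespace semantics rather than anything specific to this script:

    # Derive the in-use logical block size from the fields parsed above (illustrative).
    fmt=$(( ${nvme2n3[flbas]} & 0xf ))                                    # -> 4
    lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "${nvme2n3[lbaf$fmt]}")
    echo $(( 1 << lbads ))                                                # -> 4096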
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:32.747 02:54:03 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:32.747 02:54:03 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:32.747 02:54:03 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:32.747 02:54:03 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.747 02:54:03 
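Editor's note: just above, controller nvme2 is recorded in the suite's global bookkeeping maps before the loop moves on to nvme3 (whose id-ctrl parse is now in progress): ctrls holds the controller handle, nvmes the name of that controller's per-namespace map, bdfs its PCI address, and ordered_ctrls is indexed by the controller number. An illustration of the shape these maps take, using only the values visible in this part of the trace (the nvme0/nvme1 entries filled in earlier are omitted):

    # Illustrative shape of the discovery bookkeeping after nvme2 and nvme3 are processed.
    declare -A ctrls=( [nvme2]=nvme2          [nvme3]=nvme3 )
    declare -A nvmes=( [nvme2]=nvme2_ns       [nvme3]=nvme3_ns )       # per-controller ns maps
    declare -A bdfs=(  [nvme2]=0000:00:12.0   [nvme3]=0000:00:13.0 )   # PCI addresses
    declare -a ordered_ctrls
    ordered_ctrls[2]=nvme2
    ordered_ctrls[3]=nvme3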
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.747 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:32.748 02:54:03 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:32.748 02:54:03 nvme_scc -- 
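Editor's note: nvme3 is the QEMU controller this job later exercises with the Flexible Data Placement tests (its subnqn, nqn.2019-08.org.qemu:fdp-subsys3, shows up further down in this parse), and the ctratt value of 0x88010 read above has the FDP capability bit set. The bit position tested below is an assumption based on the NVMe 2.0 / TP4146 definition of CTRATT bit 19; the log itself never names the bit:

    # Hedged check of the FDP capability in CTRATT (bit 19 assumed per NVMe TP4146).
    ctratt=0x88010                                   # value captured above for nvme3
    (( ctratt & 1 << 19 )) && echo "nvme3 advertises Flexible Data Placement"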
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.748 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.749 
02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:32.749 02:54:03 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.749 
02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:32.749 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:32.750 
02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:32.750 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.751 02:54:03 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:32.751 02:54:03 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:32.751 02:54:03 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:32.751 02:54:03 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:32.752 02:54:03 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
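Editor's note: the selection logic traced here (get_ctrl_with_feature scc -> get_ctrls_with_feature -> ctrl_has_scc -> get_oncs) reduces to reading each controller's ONCS word through a bash nameref and testing bit 8, the Optional NVM Command Support flag for the Copy (simple copy) command. Every controller in this run reports oncs=0x15d, so all four qualify, and the first entry of the resulting list, nvme1, is the one nvme_scc.sh ends up using. A compact sketch of that check, reconstructed from the trace rather than copied from functions.sh:

    # Sketch of the SCC probe: read oncs via a nameref, then test bit 8 (Copy command).
    get_nvme_ctrl_feature() {
        local ctrl=$1 reg=${2:-oncs}
        local -n _ctrl=$ctrl                     # nameref into e.g. the nvme1 assoc array
        [[ -n ${_ctrl[$reg]:-} ]] && echo "${_ctrl[$reg]}"
    }

    ctrl_has_scc() {
        local oncs
        oncs=$(get_nvme_ctrl_feature "$1" oncs)  # 0x15d in this run
        (( oncs & 1 << 8 ))                      # bit 8 set -> Copy command supported
    }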
00:09:32.752 02:54:03 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:32.752 02:54:03 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:32.752 02:54:03 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:32.752 02:54:03 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:32.752 02:54:03 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:32.752 02:54:03 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:32.752 02:54:03 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:32.752 02:54:03 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:32.752 02:54:03 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:32.752 02:54:03 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:32.752 02:54:03 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:32.752 02:54:03 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:32.752 02:54:03 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:32.752 02:54:03 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:32.752 02:54:03 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:32.752 02:54:03 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:32.752 02:54:03 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:32.752 02:54:03 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:32.752 02:54:03 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:32.752 02:54:03 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:32.752 02:54:03 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:32.752 02:54:03 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:32.752 02:54:03 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:32.752 02:54:03 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:32.752 02:54:03 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:32.752 02:54:03 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:32.752 02:54:03 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:32.752 02:54:03 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:33.010 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:33.577 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:33.577 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:33.577 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:33.577 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:33.577 02:54:04 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:33.577 02:54:04 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:33.577 02:54:04 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.577 02:54:04 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:33.577 ************************************ 00:09:33.577 START TEST nvme_simple_copy 00:09:33.577 ************************************ 00:09:33.577 02:54:04 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:33.835 Initializing NVMe Controllers 00:09:33.835 Attaching to 0000:00:10.0 00:09:33.835 Controller supports SCC. Attached to 0000:00:10.0 00:09:33.835 Namespace ID: 1 size: 6GB 00:09:33.835 Initialization complete. 
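The ctrl_has_scc loop traced above picks a controller whose ONCS word has bit 8 set, i.e. one that advertises the NVMe Copy command: 0x15d & 0x100 is non-zero for all four controllers, and nvme1 at 0000:00:10.0 is the one handed to the simple-copy run. A minimal stand-alone sketch of the same test, assuming nvme-cli is installed and /dev/nvme1 exists (this is an illustration, not part of the harness):

    # Sketch only: does this controller advertise Simple Copy (ONCS bit 8)?
    oncs=$(nvme id-ctrl /dev/nvme1 | awk -F: '/^oncs/ {gsub(/[[:space:]]/, "", $2); print $2}')
    if (( oncs & (1 << 8) )); then
        echo "nvme1 supports the Copy command (oncs=$oncs)"
    else
        echo "nvme1 lacks Copy command support (oncs=$oncs)"
    fi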
00:09:33.835 00:09:33.835 Controller QEMU NVMe Ctrl (12340 ) 00:09:33.835 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:33.835 Namespace Block Size:4096 00:09:33.835 Writing LBAs 0 to 63 with Random Data 00:09:33.835 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:33.835 LBAs matching Written Data: 64 00:09:33.835 00:09:33.835 real 0m0.252s 00:09:33.835 user 0m0.088s 00:09:33.835 sys 0m0.062s 00:09:33.835 ************************************ 00:09:33.835 END TEST nvme_simple_copy 00:09:33.835 02:54:04 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.835 02:54:04 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:33.835 ************************************ 00:09:34.094 00:09:34.094 real 0m7.490s 00:09:34.094 user 0m1.041s 00:09:34.094 sys 0m1.377s 00:09:34.094 02:54:04 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.094 02:54:04 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:34.094 ************************************ 00:09:34.094 END TEST nvme_scc 00:09:34.094 ************************************ 00:09:34.094 02:54:04 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:34.094 02:54:04 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:34.094 02:54:04 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:34.094 02:54:04 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:34.094 02:54:04 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:34.094 02:54:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:34.094 02:54:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.094 02:54:04 -- common/autotest_common.sh@10 -- # set +x 00:09:34.094 ************************************ 00:09:34.094 START TEST nvme_fdp 00:09:34.094 ************************************ 00:09:34.094 02:54:04 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:34.094 * Looking for test storage... 00:09:34.094 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:34.094 02:54:04 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:34.094 02:54:04 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:34.094 02:54:04 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version 00:09:34.094 02:54:04 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:34.094 02:54:04 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.094 02:54:04 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.094 02:54:04 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.094 02:54:04 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.094 02:54:04 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.094 02:54:04 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.094 02:54:04 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.094 02:54:04 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.094 02:54:04 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.094 02:54:04 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.094 02:54:04 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.094 02:54:04 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:34.094 02:54:04 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:34.094 02:54:04 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.094 02:54:04 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:34.094 02:54:04 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:34.094 02:54:04 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:34.094 02:54:04 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.094 02:54:04 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:34.094 02:54:04 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.095 02:54:04 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:34.095 02:54:04 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:34.095 02:54:04 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.095 02:54:04 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:34.095 02:54:04 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.095 02:54:04 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.095 02:54:04 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.095 02:54:04 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:34.095 02:54:04 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.095 02:54:04 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:34.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.095 --rc genhtml_branch_coverage=1 00:09:34.095 --rc genhtml_function_coverage=1 00:09:34.095 --rc genhtml_legend=1 00:09:34.095 --rc geninfo_all_blocks=1 00:09:34.095 --rc geninfo_unexecuted_blocks=1 00:09:34.095 00:09:34.095 ' 00:09:34.095 02:54:04 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:34.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.095 --rc genhtml_branch_coverage=1 00:09:34.095 --rc genhtml_function_coverage=1 00:09:34.095 --rc genhtml_legend=1 00:09:34.095 --rc geninfo_all_blocks=1 00:09:34.095 --rc geninfo_unexecuted_blocks=1 00:09:34.095 00:09:34.095 ' 00:09:34.095 02:54:04 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:34.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.095 --rc genhtml_branch_coverage=1 00:09:34.095 --rc genhtml_function_coverage=1 00:09:34.095 --rc genhtml_legend=1 00:09:34.095 --rc geninfo_all_blocks=1 00:09:34.095 --rc geninfo_unexecuted_blocks=1 00:09:34.095 00:09:34.095 ' 00:09:34.095 02:54:04 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:34.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.095 --rc genhtml_branch_coverage=1 00:09:34.095 --rc genhtml_function_coverage=1 00:09:34.095 --rc genhtml_legend=1 00:09:34.095 --rc geninfo_all_blocks=1 00:09:34.095 --rc geninfo_unexecuted_blocks=1 00:09:34.095 00:09:34.095 ' 00:09:34.095 02:54:04 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:34.095 02:54:04 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:34.095 02:54:04 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:34.095 02:54:04 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:34.095 02:54:04 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:34.095 02:54:04 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:34.095 02:54:04 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.095 02:54:04 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.095 02:54:04 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.095 02:54:04 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.095 02:54:04 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.095 02:54:04 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.095 02:54:04 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:34.095 02:54:04 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.095 02:54:04 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:34.095 02:54:04 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:34.095 02:54:04 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:34.095 02:54:04 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:34.095 02:54:04 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:34.095 02:54:04 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:34.095 02:54:04 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:34.095 02:54:04 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:34.095 02:54:04 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:34.095 02:54:04 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:34.095 02:54:04 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:34.353 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:34.611 Waiting for block devices as requested 00:09:34.611 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:34.611 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:34.611 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:34.868 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:40.163 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:40.163 02:54:10 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:40.163 02:54:10 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:40.163 02:54:10 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:40.163 02:54:10 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:40.163 02:54:10 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:40.163 02:54:10 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.163 02:54:10 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:40.163 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:40.164 02:54:10 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:40.164 02:54:10 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.164 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.165 02:54:10 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:40.165 02:54:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:40.165 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.166 
02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:40.166 02:54:10 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:40.166 02:54:10 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.166 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:40.167 02:54:10 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.167 02:54:10 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
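The long run of IFS=: / read / eval lines above is nvme_get in test/common/nvme/functions.sh walking the output of nvme id-ctrl /dev/nvme0 (and then nvme id-ns /dev/ng0n1) and storing every "field : value" pair in a global associative array, so later helpers such as get_nvme_ctrl_feature can look registers up by name. A simplified sketch of that pattern, assuming nvme-cli and /dev/nvme0; the real helper additionally skips the header line and preserves padded multi-word values like the power-state rows:

    # Sketch: load nvme-cli id-ctrl output into a bash associative array.
    declare -A nvme0
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}            # field name, e.g. oncs, mdts, sqes
        val=${val#"${val%%[![:space:]]*}"}  # strip leading blanks, keep the value as-is
        [[ -n $reg && -n $val ]] && nvme0[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme0)
    echo "mdts=${nvme0[mdts]} oncs=${nvme0[oncs]} subnqn=${nvme0[subnqn]}"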
00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:40.167 02:54:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.167 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
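The block of xtrace above is nvme/functions.sh stepping through its nvme_get helper for the ng0n1 character node: it runs nvme-cli, splits each output line on ':' into a register name and a value, and stores every non-empty value in a global associative array named after the device. A minimal sketch of that loop, reconstructed from the trace rather than copied from the script (the whitespace handling in particular is an assumption):

    # Hypothetical reconstruction of nvme_get based on the xtrace above, not the verbatim source.
    nvme_get() {
        local ref=$1 reg val                       # ref is the array name, e.g. ng0n1 or nvme0n1
        shift                                      # the remaining args are the nvme-cli command
        local -gA "$ref=()"                        # global associative array (functions.sh@20)
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue              # skip header/blank lines (functions.sh@22)
            reg=${reg//[[:space:]]/}               # assumed: trim the register name
            eval "${ref}[${reg}]=\"\${val# }\""    # e.g. ng0n1[nsze]="0x140000" (functions.sh@23)
        done < <("$@")                             # e.g. /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
    }

Every [[ -n ... ]] / eval pair in the trace is one iteration of this loop, which is why the same shape repeats for each id-ns and id-ctrl field that follows.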
00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.168 02:54:10 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:40.168 02:54:10 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.168 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.169 02:54:10 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:40.169 02:54:10 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:40.169 02:54:10 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:40.170 02:54:10 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:40.170 02:54:10 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:40.170 02:54:10 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:40.170 02:54:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.170 02:54:10 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.170 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.171 02:54:10 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.171 02:54:10 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:40.171 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.172 02:54:10 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:40.172 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:40.173 02:54:10 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:40.173 02:54:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:40.173 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.174 02:54:10 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.174 02:54:10 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:40.174 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:40.175 02:54:10 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.175 02:54:10 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:40.175 02:54:10 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.175 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:40.176 02:54:10 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:40.176 02:54:10 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:40.176 02:54:10 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:40.176 02:54:10 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.176 02:54:10 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.176 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:40.177 02:54:10 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:40.177 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:40.178 02:54:10 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.178 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.179 02:54:10 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.179 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.180 
02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.180 02:54:10 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.180 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.181 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:40.182 02:54:10 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.182 
02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.182 02:54:10 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.182 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:40.183 
02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:40.183 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:40.184 02:54:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.184 02:54:10 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:40.184 02:54:10 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.184 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.185 
02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:40.185 02:54:10 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.185 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.186 
02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:40.186 02:54:10 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:40.186 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:40.187 02:54:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.187 02:54:10 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:40.187 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:40.188 02:54:10 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.188 02:54:10 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:40.188 02:54:10 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.188 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.189 02:54:10 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:40.189 02:54:10 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:40.189 02:54:10 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:40.189 02:54:10 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:40.189 02:54:10 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:40.189 02:54:10 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.189 02:54:10 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:40.189 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.190 02:54:10 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.190 
02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.190 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.191 02:54:10 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:40.191 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
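The nvme3 trace above is functions.sh@16-23 walking every "reg : val" line of nvme-cli's id-ctrl output into a bash associative array. A condensed sketch of that pattern, using the nvme-cli path seen in this run; the array name ctrl and the exact trimming are illustrative rather than the literal functions.sh code:
declare -A ctrl
while IFS=: read -r reg val; do
    [[ -n $reg && -n $val ]] || continue            # skip blank and banner lines
    reg=${reg//[[:space:]]/}                        # register name, padding dropped (e.g. "ps 0" -> ps0)
    val=${val#"${val%%[![:space:]]*}"}              # value, leading whitespace trimmed
    ctrl[$reg]=$val
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)
echo "ctratt=${ctrl[ctratt]}"                       # 0x88010 for this controller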
00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:40.192 02:54:10 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:40.192 02:54:10 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:40.193 02:54:10 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:40.193 02:54:10 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:40.193 02:54:10 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:40.193 02:54:10 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:40.193 02:54:10 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:40.193 02:54:10 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:40.193 02:54:10 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:40.193 02:54:10 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:40.193 02:54:10 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:40.193 02:54:10 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:40.193 02:54:10 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:40.193 02:54:10 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:40.193 02:54:10 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:40.193 02:54:10 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:40.193 02:54:10 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:40.193 02:54:10 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:40.193 02:54:10 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:40.193 02:54:10 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:40.193 02:54:10 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:40.193 02:54:10 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:40.193 02:54:10 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:40.193 02:54:10 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:40.193 02:54:10 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:09:40.193 02:54:10 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:40.193 02:54:10 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:40.193 02:54:10 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:40.193 02:54:10 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:40.193 02:54:10 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:40.765 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:41.336 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:41.336 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:41.336 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:41.336 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:41.336 02:54:12 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:41.336 02:54:12 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:41.336 02:54:12 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.336 02:54:12 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:41.336 ************************************ 00:09:41.336 START TEST nvme_flexible_data_placement 00:09:41.336 ************************************ 00:09:41.336 02:54:12 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:41.598 Initializing NVMe Controllers 00:09:41.598 Attaching to 0000:00:13.0 00:09:41.598 Controller supports FDP Attached to 0000:00:13.0 00:09:41.598 Namespace ID: 1 Endurance Group ID: 1 00:09:41.598 Initialization complete. 
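get_ctrl_with_feature settles on nvme3 above because only its CTRATT value (0x88010) has bit 19, the Flexible Data Placement attribute, set; the controllers reporting 0x8000 fail the same test. A minimal standalone sketch of that check, assuming the nvme-cli binary from this run; the helper name and the awk filter are illustrative, functions.sh reads ctratt from its already-parsed id-ctrl data instead:
ctrl_supports_fdp() {
    local dev=$1 ctratt
    # pull the ctratt line from id-ctrl and strip the spaces around the value
    ctratt=$(/usr/local/src/nvme-cli/nvme id-ctrl "$dev" | awk -F: '/^ctratt/ {gsub(/ /, "", $2); print $2}')
    (( ctratt & 1 << 19 ))                          # bit 19 = FDP supported
}
ctrl_supports_fdp /dev/nvme3 && echo nvme3          # 0x88010: bit 19 set, FDP capable
ctrl_supports_fdp /dev/nvme2 || echo "nvme2 lacks FDP"   # 0x8000: bit 15 only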
00:09:41.598 00:09:41.598 ================================== 00:09:41.598 == FDP tests for Namespace: #01 == 00:09:41.598 ================================== 00:09:41.598 00:09:41.598 Get Feature: FDP: 00:09:41.598 ================= 00:09:41.598 Enabled: Yes 00:09:41.598 FDP configuration Index: 0 00:09:41.598 00:09:41.598 FDP configurations log page 00:09:41.598 =========================== 00:09:41.598 Number of FDP configurations: 1 00:09:41.598 Version: 0 00:09:41.598 Size: 112 00:09:41.598 FDP Configuration Descriptor: 0 00:09:41.598 Descriptor Size: 96 00:09:41.598 Reclaim Group Identifier format: 2 00:09:41.598 FDP Volatile Write Cache: Not Present 00:09:41.598 FDP Configuration: Valid 00:09:41.598 Vendor Specific Size: 0 00:09:41.598 Number of Reclaim Groups: 2 00:09:41.598 Number of Reclaim Unit Handles: 8 00:09:41.598 Max Placement Identifiers: 128 00:09:41.598 Number of Namespaces Supported: 256 00:09:41.598 Reclaim unit Nominal Size: 6000000 bytes 00:09:41.598 Estimated Reclaim Unit Time Limit: Not Reported 00:09:41.598 RUH Desc #000: RUH Type: Initially Isolated 00:09:41.598 RUH Desc #001: RUH Type: Initially Isolated 00:09:41.598 RUH Desc #002: RUH Type: Initially Isolated 00:09:41.598 RUH Desc #003: RUH Type: Initially Isolated 00:09:41.598 RUH Desc #004: RUH Type: Initially Isolated 00:09:41.598 RUH Desc #005: RUH Type: Initially Isolated 00:09:41.598 RUH Desc #006: RUH Type: Initially Isolated 00:09:41.598 RUH Desc #007: RUH Type: Initially Isolated 00:09:41.598 00:09:41.598 FDP reclaim unit handle usage log page 00:09:41.598 ====================================== 00:09:41.598 Number of Reclaim Unit Handles: 8 00:09:41.598 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:41.598 RUH Usage Desc #001: RUH Attributes: Unused 00:09:41.598 RUH Usage Desc #002: RUH Attributes: Unused 00:09:41.598 RUH Usage Desc #003: RUH Attributes: Unused 00:09:41.598 RUH Usage Desc #004: RUH Attributes: Unused 00:09:41.598 RUH Usage Desc #005: RUH Attributes: Unused 00:09:41.598 RUH Usage Desc #006: RUH Attributes: Unused 00:09:41.598 RUH Usage Desc #007: RUH Attributes: Unused 00:09:41.598 00:09:41.598 FDP statistics log page 00:09:41.598 ======================= 00:09:41.598 Host bytes with metadata written: 935100416 00:09:41.598 Media bytes with metadata written: 935215104 00:09:41.598 Media bytes erased: 0 00:09:41.598 00:09:41.598 FDP Reclaim unit handle status 00:09:41.598 ============================== 00:09:41.598 Number of RUHS descriptors: 2 00:09:41.598 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000004438 00:09:41.598 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:41.598 00:09:41.598 FDP write on placement id: 0 success 00:09:41.598 00:09:41.598 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:41.598 00:09:41.598 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:41.598 00:09:41.598 Get Feature: FDP Events for Placement handle: #0 00:09:41.598 ======================== 00:09:41.598 Number of FDP Events: 6 00:09:41.598 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:41.598 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:41.598 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:41.598 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:41.598 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:41.598 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:41.598 00:09:41.598 FDP events log page 
00:09:41.598 =================== 00:09:41.598 Number of FDP events: 1 00:09:41.598 FDP Event #0: 00:09:41.598 Event Type: RU Not Written to Capacity 00:09:41.598 Placement Identifier: Valid 00:09:41.599 NSID: Valid 00:09:41.599 Location: Valid 00:09:41.599 Placement Identifier: 0 00:09:41.599 Event Timestamp: 7 00:09:41.599 Namespace Identifier: 1 00:09:41.599 Reclaim Group Identifier: 0 00:09:41.599 Reclaim Unit Handle Identifier: 0 00:09:41.599 00:09:41.599 FDP test passed 00:09:41.599 00:09:41.599 real 0m0.253s 00:09:41.599 user 0m0.081s 00:09:41.599 sys 0m0.069s 00:09:41.599 ************************************ 00:09:41.599 END TEST nvme_flexible_data_placement 00:09:41.599 ************************************ 00:09:41.599 02:54:12 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.599 02:54:12 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:41.860 00:09:41.860 real 0m7.750s 00:09:41.860 user 0m1.134s 00:09:41.860 sys 0m1.434s 00:09:41.860 ************************************ 00:09:41.860 END TEST nvme_fdp 00:09:41.860 ************************************ 00:09:41.861 02:54:12 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.861 02:54:12 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:41.861 02:54:12 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:41.861 02:54:12 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:41.861 02:54:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:41.861 02:54:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.861 02:54:12 -- common/autotest_common.sh@10 -- # set +x 00:09:41.861 ************************************ 00:09:41.861 START TEST nvme_rpc 00:09:41.861 ************************************ 00:09:41.861 02:54:12 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:41.861 * Looking for test storage... 
00:09:41.861 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:41.861 02:54:12 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:41.861 02:54:12 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:41.861 02:54:12 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:41.861 02:54:12 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:41.861 02:54:12 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:41.861 02:54:12 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:41.861 02:54:12 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:41.861 02:54:12 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:41.861 02:54:12 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:41.861 02:54:12 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:41.861 02:54:12 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:41.861 02:54:12 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:41.861 02:54:12 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:41.861 02:54:12 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:41.861 02:54:12 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:41.861 02:54:12 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:41.861 02:54:12 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:41.861 02:54:12 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:41.861 02:54:12 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:41.861 02:54:12 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:41.861 02:54:12 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:41.861 02:54:12 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:41.861 02:54:12 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:41.861 02:54:12 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:41.861 02:54:12 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:41.861 02:54:12 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:41.861 02:54:12 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:41.861 02:54:12 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:41.861 02:54:12 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:41.861 02:54:12 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:41.861 02:54:12 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:41.861 02:54:12 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:41.861 02:54:12 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:41.861 02:54:12 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:41.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.861 --rc genhtml_branch_coverage=1 00:09:41.861 --rc genhtml_function_coverage=1 00:09:41.861 --rc genhtml_legend=1 00:09:41.861 --rc geninfo_all_blocks=1 00:09:41.861 --rc geninfo_unexecuted_blocks=1 00:09:41.861 00:09:41.861 ' 00:09:41.861 02:54:12 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:41.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.861 --rc genhtml_branch_coverage=1 00:09:41.861 --rc genhtml_function_coverage=1 00:09:41.861 --rc genhtml_legend=1 00:09:41.861 --rc geninfo_all_blocks=1 00:09:41.861 --rc geninfo_unexecuted_blocks=1 00:09:41.861 00:09:41.861 ' 00:09:41.861 02:54:12 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:41.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.861 --rc genhtml_branch_coverage=1 00:09:41.861 --rc genhtml_function_coverage=1 00:09:41.861 --rc genhtml_legend=1 00:09:41.861 --rc geninfo_all_blocks=1 00:09:41.861 --rc geninfo_unexecuted_blocks=1 00:09:41.861 00:09:41.861 ' 00:09:41.861 02:54:12 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:41.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.861 --rc genhtml_branch_coverage=1 00:09:41.861 --rc genhtml_function_coverage=1 00:09:41.861 --rc genhtml_legend=1 00:09:41.861 --rc geninfo_all_blocks=1 00:09:41.861 --rc geninfo_unexecuted_blocks=1 00:09:41.861 00:09:41.861 ' 00:09:41.861 02:54:12 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:41.861 02:54:12 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:41.861 02:54:12 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:41.861 02:54:12 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:41.861 02:54:12 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:41.861 02:54:12 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:41.861 02:54:12 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:41.861 02:54:12 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:41.861 02:54:12 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:41.861 02:54:12 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:41.861 02:54:12 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:42.122 02:54:12 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:42.122 02:54:12 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:42.122 02:54:12 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:42.122 02:54:12 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:42.122 02:54:12 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65710 00:09:42.122 02:54:12 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:42.122 02:54:12 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65710 00:09:42.122 02:54:12 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65710 ']' 00:09:42.122 02:54:12 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.122 02:54:12 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.123 02:54:12 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.123 02:54:12 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.123 02:54:12 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.123 02:54:12 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:42.123 [2024-12-05 02:54:12.843434] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
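nvme_rpc.sh@13 above resolves its target controller by asking gen_nvme.sh for every NVMe bdev config, pulling the PCI addresses with jq, and keeping the first one. Roughly the same sequence as a standalone sketch, using the repo paths from this run and assuming spdk_tgt is already up and listening on /var/tmp/spdk.sock:
rootdir=/home/vagrant/spdk_repo/spdk
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
(( ${#bdfs[@]} > 0 )) || exit 1                     # bail out if no controllers were found
bdf=${bdfs[0]}                                      # 0000:00:10.0 in this run
# attach the first controller as bdev Nvme0 over PCIe, as the RPC test does next
"$rootdir/scripts/rpc.py" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a "$bdf"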
00:09:42.123 [2024-12-05 02:54:12.843571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65710 ] 00:09:42.384 [2024-12-05 02:54:13.007886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:42.384 [2024-12-05 02:54:13.136888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.384 [2024-12-05 02:54:13.136975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.327 02:54:13 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.327 02:54:13 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:43.327 02:54:13 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:43.589 Nvme0n1 00:09:43.589 02:54:14 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:43.589 02:54:14 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:43.589 request: 00:09:43.589 { 00:09:43.589 "bdev_name": "Nvme0n1", 00:09:43.589 "filename": "non_existing_file", 00:09:43.589 "method": "bdev_nvme_apply_firmware", 00:09:43.589 "req_id": 1 00:09:43.589 } 00:09:43.589 Got JSON-RPC error response 00:09:43.589 response: 00:09:43.589 { 00:09:43.589 "code": -32603, 00:09:43.589 "message": "open file failed." 00:09:43.589 } 00:09:43.851 02:54:14 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:43.851 02:54:14 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:43.851 02:54:14 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:43.851 02:54:14 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:43.851 02:54:14 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65710 00:09:43.851 02:54:14 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65710 ']' 00:09:43.851 02:54:14 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65710 00:09:43.851 02:54:14 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:43.851 02:54:14 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:43.851 02:54:14 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65710 00:09:43.851 02:54:14 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:43.851 killing process with pid 65710 00:09:43.851 02:54:14 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:43.851 02:54:14 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65710' 00:09:43.851 02:54:14 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65710 00:09:43.851 02:54:14 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65710 00:09:45.766 00:09:45.766 real 0m3.928s 00:09:45.766 user 0m7.181s 00:09:45.766 sys 0m0.748s 00:09:45.766 02:54:16 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.766 02:54:16 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.766 ************************************ 00:09:45.766 END TEST nvme_rpc 00:09:45.766 ************************************ 00:09:45.766 02:54:16 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:45.766 02:54:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:45.766 02:54:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.766 02:54:16 -- common/autotest_common.sh@10 -- # set +x 00:09:45.766 ************************************ 00:09:45.766 START TEST nvme_rpc_timeouts 00:09:45.766 ************************************ 00:09:45.766 02:54:16 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:45.766 * Looking for test storage... 00:09:45.766 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:45.766 02:54:16 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:45.766 02:54:16 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:45.766 02:54:16 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:09:46.028 02:54:16 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:46.028 02:54:16 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.028 02:54:16 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.028 02:54:16 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.028 02:54:16 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.028 02:54:16 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.028 02:54:16 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.028 02:54:16 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.028 02:54:16 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.028 02:54:16 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.028 02:54:16 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.028 02:54:16 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.028 02:54:16 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:46.028 02:54:16 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:46.028 02:54:16 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.028 02:54:16 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:46.028 02:54:16 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:46.028 02:54:16 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:46.028 02:54:16 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.028 02:54:16 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:46.028 02:54:16 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.028 02:54:16 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:46.028 02:54:16 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:46.028 02:54:16 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.028 02:54:16 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:46.028 02:54:16 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.028 02:54:16 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.028 02:54:16 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.028 02:54:16 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:46.028 02:54:16 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.028 02:54:16 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:46.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.028 --rc genhtml_branch_coverage=1 00:09:46.028 --rc genhtml_function_coverage=1 00:09:46.028 --rc genhtml_legend=1 00:09:46.028 --rc geninfo_all_blocks=1 00:09:46.028 --rc geninfo_unexecuted_blocks=1 00:09:46.028 00:09:46.028 ' 00:09:46.028 02:54:16 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:46.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.028 --rc genhtml_branch_coverage=1 00:09:46.028 --rc genhtml_function_coverage=1 00:09:46.028 --rc genhtml_legend=1 00:09:46.028 --rc geninfo_all_blocks=1 00:09:46.028 --rc geninfo_unexecuted_blocks=1 00:09:46.028 00:09:46.028 ' 00:09:46.028 02:54:16 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:46.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.028 --rc genhtml_branch_coverage=1 00:09:46.028 --rc genhtml_function_coverage=1 00:09:46.028 --rc genhtml_legend=1 00:09:46.028 --rc geninfo_all_blocks=1 00:09:46.028 --rc geninfo_unexecuted_blocks=1 00:09:46.028 00:09:46.028 ' 00:09:46.028 02:54:16 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:46.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.028 --rc genhtml_branch_coverage=1 00:09:46.028 --rc genhtml_function_coverage=1 00:09:46.028 --rc genhtml_legend=1 00:09:46.028 --rc geninfo_all_blocks=1 00:09:46.028 --rc geninfo_unexecuted_blocks=1 00:09:46.028 00:09:46.028 ' 00:09:46.028 02:54:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:46.028 02:54:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65786 00:09:46.028 02:54:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65786 00:09:46.028 02:54:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65818 00:09:46.029 02:54:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
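00:09:46.029 For orientation, the nvme_rpc_timeouts run traced below reduces to: start a target, snapshot the default bdev_nvme options, raise the three timeout knobs over JSON-RPC, snapshot again, and confirm each knob changed. A condensed sketch of that flow (the redirections into the two tmpfiles are inferred from how they are read back later, so treat this as an approximation rather than the verbatim script):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  default=/tmp/settings_default_65786
  modified=/tmp/settings_modified_65786

  $rpc save_config > "$default"        # defaults: action_on_timeout=none, both timeouts 0
  $rpc bdev_nvme_set_options --timeout-us=12000000 \
       --timeout-admin-us=24000000 --action-on-timeout=abort
  $rpc save_config > "$modified"       # should now carry the new values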
00:09:46.029 02:54:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65818 00:09:46.029 02:54:16 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 65818 ']' 00:09:46.029 02:54:16 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.029 02:54:16 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.029 02:54:16 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.029 02:54:16 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.029 02:54:16 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:46.029 02:54:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:46.029 [2024-12-05 02:54:16.773035] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:09:46.029 [2024-12-05 02:54:16.773210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65818 ] 00:09:46.291 [2024-12-05 02:54:16.938398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:46.291 [2024-12-05 02:54:17.079162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.291 [2024-12-05 02:54:17.079175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.236 02:54:17 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.236 Checking default timeout settings: 00:09:47.236 02:54:17 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:47.236 02:54:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:47.236 02:54:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:47.498 Making settings changes with rpc: 00:09:47.498 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:47.498 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:47.760 Check default vs. modified settings: 00:09:47.760 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:47.760 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65786 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65786 00:09:48.022 Setting action_on_timeout is changed as expected. 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65786 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65786 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:48.022 Setting timeout_us is changed as expected. 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
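00:09:48.022 The before/after values above are pulled straight out of the saved JSON configs: save_config emits one "key": value pair per line, so a grep piped through awk and sed is enough to isolate the value. Roughly what the modified snapshot contains and how the pipeline reads it (the surrounding JSON layout is assumed from typical save_config output and may differ in detail):

  #   "method": "bdev_nvme_set_options",
  #   "params": {
  #     "action_on_timeout": "abort",
  #     "timeout_us": 12000000,
  #     "timeout_admin_us": 24000000,
  #     ...
  grep timeout_us /tmp/settings_modified_65786 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
  # prints 12000000 (the sed strips the trailing comma); the default snapshot yields 0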
00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65786 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65786 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:48.022 Setting timeout_admin_us is changed as expected. 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65786 /tmp/settings_modified_65786 00:09:48.022 02:54:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65818 00:09:48.022 02:54:18 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65818 ']' 00:09:48.022 02:54:18 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65818 00:09:48.022 02:54:18 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:48.022 02:54:18 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:48.022 02:54:18 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65818 00:09:48.022 killing process with pid 65818 00:09:48.022 02:54:18 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:48.022 02:54:18 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:48.022 02:54:18 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65818' 00:09:48.022 02:54:18 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65818 00:09:48.022 02:54:18 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65818 00:09:49.402 RPC TIMEOUT SETTING TEST PASSED. 00:09:49.402 02:54:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
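00:09:49.402 Condensed, the default-vs-modified check that just completed is a loop over the three knobs; this run only exercises the success branch, so the failure arm below is paraphrased rather than copied from the script:

  for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" /tmp/settings_default_65786  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting"  /tmp/settings_modified_65786 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    if [ "$before" == "$after" ]; then
      echo "ERROR: $setting did not change (still $before)"; exit 1
    fi
    echo "Setting $setting is changed as expected."
  done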
00:09:49.402 00:09:49.402 real 0m3.625s 00:09:49.402 user 0m6.812s 00:09:49.402 sys 0m0.722s 00:09:49.402 02:54:20 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.402 02:54:20 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:49.402 ************************************ 00:09:49.402 END TEST nvme_rpc_timeouts 00:09:49.402 ************************************ 00:09:49.402 02:54:20 -- spdk/autotest.sh@239 -- # uname -s 00:09:49.402 02:54:20 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:49.402 02:54:20 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:49.402 02:54:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:49.402 02:54:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.402 02:54:20 -- common/autotest_common.sh@10 -- # set +x 00:09:49.402 ************************************ 00:09:49.402 START TEST sw_hotplug 00:09:49.402 ************************************ 00:09:49.402 02:54:20 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:49.662 * Looking for test storage... 00:09:49.662 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:49.662 02:54:20 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:49.662 02:54:20 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:09:49.662 02:54:20 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:49.662 02:54:20 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:49.662 02:54:20 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:49.662 02:54:20 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:49.662 02:54:20 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:49.662 02:54:20 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:49.662 02:54:20 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:49.662 02:54:20 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:49.662 02:54:20 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:49.662 02:54:20 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:49.662 02:54:20 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:49.662 02:54:20 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:49.662 02:54:20 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:49.662 02:54:20 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:49.662 02:54:20 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:49.662 02:54:20 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:49.662 02:54:20 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:49.662 02:54:20 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:49.662 02:54:20 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:49.662 02:54:20 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:49.662 02:54:20 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:49.662 02:54:20 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:49.662 02:54:20 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:49.662 02:54:20 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:49.662 02:54:20 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:49.662 02:54:20 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:49.662 02:54:20 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:49.662 02:54:20 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:49.662 02:54:20 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:49.662 02:54:20 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:49.662 02:54:20 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:49.662 02:54:20 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:49.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.662 --rc genhtml_branch_coverage=1 00:09:49.662 --rc genhtml_function_coverage=1 00:09:49.662 --rc genhtml_legend=1 00:09:49.662 --rc geninfo_all_blocks=1 00:09:49.662 --rc geninfo_unexecuted_blocks=1 00:09:49.662 00:09:49.662 ' 00:09:49.662 02:54:20 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:49.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.662 --rc genhtml_branch_coverage=1 00:09:49.662 --rc genhtml_function_coverage=1 00:09:49.662 --rc genhtml_legend=1 00:09:49.662 --rc geninfo_all_blocks=1 00:09:49.662 --rc geninfo_unexecuted_blocks=1 00:09:49.662 00:09:49.662 ' 00:09:49.662 02:54:20 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:49.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.662 --rc genhtml_branch_coverage=1 00:09:49.662 --rc genhtml_function_coverage=1 00:09:49.662 --rc genhtml_legend=1 00:09:49.662 --rc geninfo_all_blocks=1 00:09:49.662 --rc geninfo_unexecuted_blocks=1 00:09:49.662 00:09:49.662 ' 00:09:49.662 02:54:20 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:49.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.662 --rc genhtml_branch_coverage=1 00:09:49.662 --rc genhtml_function_coverage=1 00:09:49.662 --rc genhtml_legend=1 00:09:49.662 --rc geninfo_all_blocks=1 00:09:49.662 --rc geninfo_unexecuted_blocks=1 00:09:49.662 00:09:49.662 ' 00:09:49.662 02:54:20 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:49.922 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:50.184 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:50.184 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:50.184 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:50.184 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:50.184 02:54:20 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:50.184 02:54:20 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:50.184 02:54:20 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
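00:09:50.184 The nvme_in_userspace trace that follows is dense but mechanical: list every PCI function whose class/subclass/prog-if is 01/08/02 (an NVMe controller), then keep only the ones PCI_ALLOWED/PCI_BLOCKED let through. A condensed equivalent of the Linux path, reusing the lspci pipeline from the trace (pci_can_use is the scripts/common.sh helper whose [[ ... ]] checks appear further down; the FreeBSD branch and per-device sysfs checks are omitted):

  # class 01 = mass storage, subclass 08 = NVM, prog-if 02 = NVMe
  for bdf in $(lspci -mm -n -D | grep -i -- -p02 \
               | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'); do
    pci_can_use "$bdf" || continue    # honours PCI_ALLOWED / PCI_BLOCKED; an empty allow-list passes everything
    echo "$bdf"
  done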
00:09:50.184 02:54:20 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:50.184 02:54:20 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:50.184 02:54:20 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:50.185 02:54:20 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:50.185 02:54:20 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:50.185 02:54:20 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:50.185 02:54:20 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:50.185 02:54:20 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:50.185 02:54:20 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:50.185 02:54:20 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:50.185 02:54:20 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:50.185 02:54:20 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:50.444 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:50.703 Waiting for block devices as requested 00:09:50.703 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:50.703 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:50.964 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:50.964 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:56.259 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:56.259 02:54:26 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:56.260 02:54:26 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:56.521 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:56.521 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:56.521 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:56.782 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:57.041 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:57.041 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:57.041 02:54:27 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:57.041 02:54:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:57.303 02:54:27 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:57.303 02:54:27 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:57.303 02:54:27 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66681 00:09:57.303 02:54:27 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:57.303 02:54:27 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:57.303 02:54:27 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:57.303 02:54:27 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:57.303 02:54:27 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:57.303 02:54:27 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:57.303 02:54:27 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:57.303 02:54:27 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:57.303 02:54:27 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:57.303 02:54:27 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:57.303 02:54:27 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:57.303 02:54:27 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:57.303 02:54:27 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:57.303 02:54:27 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:57.303 Initializing NVMe Controllers 00:09:57.303 Attaching to 0000:00:10.0 00:09:57.303 Attaching to 0000:00:11.0 00:09:57.303 Attached to 0000:00:11.0 00:09:57.681 Attached to 0000:00:10.0 00:09:57.681 Initialization complete. Starting I/O... 
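00:09:57.681 Both controllers are now attached and the example is generating I/O; the iterations that follow simulate surprise removal and re-insertion from the host side while the app keeps running. The bare echo statements in the sw_hotplug.sh trace are sysfs writes whose targets the xtrace output does not show; under the standard Linux PCI interface the sequence corresponds roughly to:

  bdf=0000:00:10.0
  echo 1 > /sys/bus/pci/devices/$bdf/remove     # surprise-remove; the app logs nvme_ctrlr_fail for this ctrlr
  sleep 6                                       # hotplug_wait
  echo 1 > /sys/bus/pci/rescan                  # make the function reappear
  echo uio_pci_generic > /sys/bus/pci/devices/$bdf/driver_override
  echo "$bdf" > /sys/bus/pci/drivers_probe      # rebind so the next iteration can attach again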
00:09:57.681 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:57.681 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:57.681 00:09:58.647 QEMU NVMe Ctrl (12341 ): 2040 I/Os completed (+2040) 00:09:58.647 QEMU NVMe Ctrl (12340 ): 2044 I/Os completed (+2044) 00:09:58.647 00:09:59.592 QEMU NVMe Ctrl (12341 ): 4552 I/Os completed (+2512) 00:09:59.592 QEMU NVMe Ctrl (12340 ): 4560 I/Os completed (+2516) 00:09:59.592 00:10:00.536 QEMU NVMe Ctrl (12341 ): 7140 I/Os completed (+2588) 00:10:00.536 QEMU NVMe Ctrl (12340 ): 7148 I/Os completed (+2588) 00:10:00.536 00:10:01.471 QEMU NVMe Ctrl (12341 ): 10496 I/Os completed (+3356) 00:10:01.471 QEMU NVMe Ctrl (12340 ): 10505 I/Os completed (+3357) 00:10:01.471 00:10:02.406 QEMU NVMe Ctrl (12341 ): 14416 I/Os completed (+3920) 00:10:02.406 QEMU NVMe Ctrl (12340 ): 14425 I/Os completed (+3920) 00:10:02.406 00:10:03.341 02:54:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:03.341 02:54:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:03.341 02:54:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:03.341 [2024-12-05 02:54:33.938482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:03.341 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:03.341 [2024-12-05 02:54:33.939647] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:03.341 [2024-12-05 02:54:33.939699] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:03.341 [2024-12-05 02:54:33.939714] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:03.341 [2024-12-05 02:54:33.939730] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:03.341 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:03.341 [2024-12-05 02:54:33.941217] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:03.341 [2024-12-05 02:54:33.941255] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:03.341 [2024-12-05 02:54:33.941267] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:03.341 [2024-12-05 02:54:33.941279] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:03.341 02:54:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:03.341 02:54:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:03.341 [2024-12-05 02:54:33.961505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:03.341 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:03.341 [2024-12-05 02:54:33.962382] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:03.341 [2024-12-05 02:54:33.962415] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:03.341 [2024-12-05 02:54:33.962432] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:03.341 [2024-12-05 02:54:33.962446] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:03.341 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:03.341 [2024-12-05 02:54:33.963901] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:03.341 [2024-12-05 02:54:33.963936] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:03.341 [2024-12-05 02:54:33.963948] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:03.341 [2024-12-05 02:54:33.963959] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:03.341 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:03.341 EAL: Scan for (pci) bus failed. 00:10:03.341 02:54:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:03.341 02:54:33 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:03.341 02:54:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:03.341 02:54:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:03.341 02:54:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:03.341 02:54:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:03.341 00:10:03.341 02:54:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:03.341 02:54:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:03.341 02:54:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:03.341 02:54:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:03.341 Attaching to 0000:00:10.0 00:10:03.341 Attached to 0000:00:10.0 00:10:03.598 02:54:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:03.598 02:54:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:03.598 02:54:34 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:03.598 Attaching to 0000:00:11.0 00:10:03.598 Attached to 0000:00:11.0 00:10:04.530 QEMU NVMe Ctrl (12340 ): 3828 I/Os completed (+3828) 00:10:04.530 QEMU NVMe Ctrl (12341 ): 3488 I/Os completed (+3488) 00:10:04.530 00:10:05.461 QEMU NVMe Ctrl (12340 ): 7744 I/Os completed (+3916) 00:10:05.461 QEMU NVMe Ctrl (12341 ): 7408 I/Os completed (+3920) 00:10:05.461 00:10:06.395 QEMU NVMe Ctrl (12340 ): 11514 I/Os completed (+3770) 00:10:06.395 QEMU NVMe Ctrl (12341 ): 11136 I/Os completed (+3728) 00:10:06.395 00:10:07.329 QEMU NVMe Ctrl (12340 ): 15133 I/Os completed (+3619) 00:10:07.329 QEMU NVMe Ctrl (12341 ): 14741 I/Os completed (+3605) 00:10:07.329 00:10:08.707 QEMU NVMe Ctrl (12340 ): 18270 I/Os completed (+3137) 00:10:08.707 QEMU NVMe Ctrl (12341 ): 17891 I/Os completed (+3150) 00:10:08.707 00:10:09.652 QEMU NVMe Ctrl (12340 ): 21094 I/Os completed (+2824) 00:10:09.652 QEMU NVMe Ctrl (12341 ): 20715 I/Os completed (+2824) 00:10:09.652 00:10:10.591 QEMU NVMe Ctrl (12340 ): 24185 I/Os completed (+3091) 00:10:10.591 QEMU NVMe Ctrl (12341 ): 23802 I/Os completed (+3087) 
00:10:10.591 00:10:11.527 QEMU NVMe Ctrl (12340 ): 27551 I/Os completed (+3366) 00:10:11.527 QEMU NVMe Ctrl (12341 ): 27024 I/Os completed (+3222) 00:10:11.527 00:10:12.461 QEMU NVMe Ctrl (12340 ): 30779 I/Os completed (+3228) 00:10:12.461 QEMU NVMe Ctrl (12341 ): 30260 I/Os completed (+3236) 00:10:12.461 00:10:13.395 QEMU NVMe Ctrl (12340 ): 34039 I/Os completed (+3260) 00:10:13.395 QEMU NVMe Ctrl (12341 ): 33524 I/Os completed (+3264) 00:10:13.395 00:10:14.330 QEMU NVMe Ctrl (12340 ): 37647 I/Os completed (+3608) 00:10:14.330 QEMU NVMe Ctrl (12341 ): 37129 I/Os completed (+3605) 00:10:14.330 00:10:15.708 QEMU NVMe Ctrl (12340 ): 41431 I/Os completed (+3784) 00:10:15.708 QEMU NVMe Ctrl (12341 ): 40898 I/Os completed (+3769) 00:10:15.708 00:10:15.708 02:54:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:15.708 02:54:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:15.708 02:54:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:15.708 02:54:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:15.708 [2024-12-05 02:54:46.258042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:15.708 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:15.708 [2024-12-05 02:54:46.259067] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.708 [2024-12-05 02:54:46.259195] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.708 [2024-12-05 02:54:46.259226] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.708 [2024-12-05 02:54:46.259287] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.708 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:15.708 [2024-12-05 02:54:46.260916] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.708 [2024-12-05 02:54:46.261024] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.708 [2024-12-05 02:54:46.261053] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.708 [2024-12-05 02:54:46.261112] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.708 02:54:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:15.708 02:54:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:15.708 [2024-12-05 02:54:46.279984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:15.708 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:15.708 [2024-12-05 02:54:46.280947] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.708 [2024-12-05 02:54:46.281049] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.709 [2024-12-05 02:54:46.281098] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.709 [2024-12-05 02:54:46.281158] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.709 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:15.709 [2024-12-05 02:54:46.282567] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.709 [2024-12-05 02:54:46.282649] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.709 [2024-12-05 02:54:46.282666] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.709 [2024-12-05 02:54:46.282678] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.709 02:54:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:15.709 02:54:46 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:15.709 02:54:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:15.709 02:54:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:15.709 02:54:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:15.709 02:54:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:15.709 02:54:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:15.709 02:54:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:15.709 02:54:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:15.709 02:54:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:15.709 Attaching to 0000:00:10.0 00:10:15.709 Attached to 0000:00:10.0 00:10:15.709 02:54:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:15.709 02:54:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:15.709 02:54:46 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:15.709 Attaching to 0000:00:11.0 00:10:15.709 Attached to 0000:00:11.0 00:10:16.644 QEMU NVMe Ctrl (12340 ): 2624 I/Os completed (+2624) 00:10:16.644 QEMU NVMe Ctrl (12341 ): 2339 I/Os completed (+2339) 00:10:16.644 00:10:17.587 QEMU NVMe Ctrl (12340 ): 6373 I/Os completed (+3749) 00:10:17.587 QEMU NVMe Ctrl (12341 ): 6077 I/Os completed (+3738) 00:10:17.587 00:10:18.529 QEMU NVMe Ctrl (12340 ): 9988 I/Os completed (+3615) 00:10:18.529 QEMU NVMe Ctrl (12341 ): 9685 I/Os completed (+3608) 00:10:18.529 00:10:19.470 QEMU NVMe Ctrl (12340 ): 13576 I/Os completed (+3588) 00:10:19.470 QEMU NVMe Ctrl (12341 ): 13270 I/Os completed (+3585) 00:10:19.470 00:10:20.413 QEMU NVMe Ctrl (12340 ): 16854 I/Os completed (+3278) 00:10:20.413 QEMU NVMe Ctrl (12341 ): 16552 I/Os completed (+3282) 00:10:20.413 00:10:21.357 QEMU NVMe Ctrl (12340 ): 19574 I/Os completed (+2720) 00:10:21.357 QEMU NVMe Ctrl (12341 ): 19272 I/Os completed (+2720) 00:10:21.357 00:10:22.743 QEMU NVMe Ctrl (12340 ): 22582 I/Os completed (+3008) 00:10:22.743 QEMU NVMe Ctrl (12341 ): 22292 I/Os completed (+3020) 00:10:22.743 00:10:23.316 QEMU NVMe Ctrl (12340 ): 25136 I/Os completed (+2554) 00:10:23.316 QEMU NVMe Ctrl (12341 ): 24887 I/Os completed (+2595) 00:10:23.316 
00:10:24.723 QEMU NVMe Ctrl (12340 ): 27756 I/Os completed (+2620) 00:10:24.723 QEMU NVMe Ctrl (12341 ): 27526 I/Os completed (+2639) 00:10:24.723 00:10:25.665 QEMU NVMe Ctrl (12340 ): 30344 I/Os completed (+2588) 00:10:25.665 QEMU NVMe Ctrl (12341 ): 30129 I/Os completed (+2603) 00:10:25.665 00:10:26.618 QEMU NVMe Ctrl (12340 ): 33056 I/Os completed (+2712) 00:10:26.618 QEMU NVMe Ctrl (12341 ): 32867 I/Os completed (+2738) 00:10:26.618 00:10:27.556 QEMU NVMe Ctrl (12340 ): 35892 I/Os completed (+2836) 00:10:27.556 QEMU NVMe Ctrl (12341 ): 35703 I/Os completed (+2836) 00:10:27.556 00:10:27.815 02:54:58 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:27.815 02:54:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:27.815 02:54:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:27.815 02:54:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:27.815 [2024-12-05 02:54:58.508835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:27.815 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:27.815 [2024-12-05 02:54:58.509827] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.815 [2024-12-05 02:54:58.509867] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.815 [2024-12-05 02:54:58.509882] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.815 [2024-12-05 02:54:58.509896] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.816 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:27.816 [2024-12-05 02:54:58.511438] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.816 [2024-12-05 02:54:58.511536] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.816 [2024-12-05 02:54:58.511666] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.816 [2024-12-05 02:54:58.511878] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.816 02:54:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:27.816 02:54:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:27.816 [2024-12-05 02:54:58.532827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:27.816 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:27.816 [2024-12-05 02:54:58.533864] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.816 [2024-12-05 02:54:58.534004] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.816 [2024-12-05 02:54:58.534039] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.816 [2024-12-05 02:54:58.534142] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.816 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:27.816 [2024-12-05 02:54:58.535562] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.816 [2024-12-05 02:54:58.535656] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.816 [2024-12-05 02:54:58.535687] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.816 [2024-12-05 02:54:58.535710] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.816 02:54:58 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:27.816 02:54:58 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:27.816 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:27.816 EAL: Scan for (pci) bus failed. 00:10:27.816 02:54:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:27.816 02:54:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:27.816 02:54:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:28.076 02:54:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:28.076 02:54:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:28.076 02:54:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:28.076 02:54:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:28.076 02:54:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:28.076 Attaching to 0000:00:10.0 00:10:28.076 Attached to 0000:00:10.0 00:10:28.076 02:54:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:28.076 02:54:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:28.076 02:54:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:28.076 Attaching to 0000:00:11.0 00:10:28.076 Attached to 0000:00:11.0 00:10:28.076 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:28.076 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:28.076 [2024-12-05 02:54:58.840115] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:40.350 02:55:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:40.350 02:55:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:40.350 02:55:10 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.90 00:10:40.350 02:55:10 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.90 00:10:40.350 02:55:10 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:40.350 02:55:10 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.90 00:10:40.350 02:55:10 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.90 2 00:10:40.350 remove_attach_helper took 42.90s to complete (handling 2 nvme drive(s)) 02:55:10 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:46.932 02:55:16 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66681 00:10:46.932 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66681) - No such process 00:10:46.932 02:55:16 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66681 00:10:46.932 02:55:16 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:46.932 02:55:16 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:46.932 02:55:16 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:46.932 02:55:16 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67229 00:10:46.932 02:55:16 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:46.932 02:55:16 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67229 00:10:46.932 02:55:16 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:46.932 02:55:16 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67229 ']' 00:10:46.932 02:55:16 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.932 02:55:16 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.932 02:55:16 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.932 02:55:16 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.932 02:55:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:46.932 [2024-12-05 02:55:16.935030] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:10:46.932 [2024-12-05 02:55:16.935202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67229 ] 00:10:46.932 [2024-12-05 02:55:17.099711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.932 [2024-12-05 02:55:17.220675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.193 02:55:17 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.193 02:55:17 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:47.193 02:55:17 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:47.193 02:55:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.193 02:55:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:47.193 02:55:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.193 02:55:17 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:47.193 02:55:17 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:47.193 02:55:17 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:47.193 02:55:17 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:47.193 02:55:17 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:47.193 02:55:17 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:47.193 02:55:17 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:47.193 02:55:17 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:47.193 02:55:17 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:47.193 02:55:17 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:47.193 02:55:17 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:47.193 02:55:17 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:47.193 02:55:17 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:53.771 02:55:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:53.771 02:55:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:53.771 02:55:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:53.771 02:55:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:53.771 02:55:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:53.771 02:55:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:53.771 02:55:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:53.771 02:55:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:53.771 02:55:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:53.771 02:55:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:53.771 02:55:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.771 02:55:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:53.771 02:55:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:53.771 02:55:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.771 02:55:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:53.771 02:55:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:53.771 [2024-12-05 02:55:24.022408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:53.771 [2024-12-05 02:55:24.023626] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.771 [2024-12-05 02:55:24.023662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.771 [2024-12-05 02:55:24.023675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.771 [2024-12-05 02:55:24.023692] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.771 [2024-12-05 02:55:24.023700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.771 [2024-12-05 02:55:24.023709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.771 [2024-12-05 02:55:24.023716] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.771 [2024-12-05 02:55:24.023724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.771 [2024-12-05 02:55:24.023730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.771 [2024-12-05 02:55:24.023741] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.771 [2024-12-05 02:55:24.023748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.771 [2024-12-05 02:55:24.023756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.771 [2024-12-05 02:55:24.422392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:53.771 [2024-12-05 02:55:24.423588] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.771 [2024-12-05 02:55:24.423618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.771 [2024-12-05 02:55:24.423629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.771 [2024-12-05 02:55:24.423642] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.771 [2024-12-05 02:55:24.423651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.771 [2024-12-05 02:55:24.423658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.771 [2024-12-05 02:55:24.423666] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.771 [2024-12-05 02:55:24.423673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.771 [2024-12-05 02:55:24.423681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.771 [2024-12-05 02:55:24.423688] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.771 [2024-12-05 02:55:24.423696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.771 [2024-12-05 02:55:24.423702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.771 02:55:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:53.771 02:55:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:53.771 02:55:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:53.771 02:55:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:53.771 02:55:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:53.771 02:55:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:53.771 02:55:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.771 02:55:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:53.771 02:55:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.771 02:55:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:53.771 02:55:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:54.032 02:55:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:54.032 02:55:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:54.032 02:55:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:54.032 02:55:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:54.032 02:55:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:54.032 02:55:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:54.032 02:55:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:54.032 02:55:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:54.032 02:55:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:54.032 02:55:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:54.032 02:55:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:06.279 02:55:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:06.279 02:55:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:06.279 02:55:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:06.279 02:55:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:06.279 02:55:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:06.279 02:55:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:06.279 02:55:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.279 02:55:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:06.279 02:55:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.279 02:55:36 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:06.279 02:55:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:06.279 02:55:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:06.279 02:55:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:06.279 02:55:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:06.279 02:55:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:06.279 02:55:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:06.279 02:55:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:06.279 02:55:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:06.279 02:55:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:06.279 02:55:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:06.279 02:55:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:06.279 02:55:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.279 02:55:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:06.279 02:55:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.279 02:55:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:06.279 02:55:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:06.279 [2024-12-05 02:55:36.922590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
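Each hotplug iteration above has the same shape. The helper surprise-removes both test controllers, SPDK's PCIe driver marks each one failed (nvme_ctrlr_fail) and aborts its outstanding ASYNC EVENT REQUEST admin commands, which is all the repeated "ABORTED - BY REQUEST" notices are, and then sw_hotplug.sh@56-62 re-attaches the devices by echoing a driver name and the BDF back out. The trace only records the values being written (1, uio_pci_generic, the BDF twice, an empty string), not the sysfs files they land in, so the following is a plausible reconstruction under assumed paths, not the script's exact code:

    # Hypothetical sketch of one remove/re-attach cycle driven from sysfs.
    # The target paths are assumptions; the echoed values match the trace.
    nvmes=(0000:00:10.0 0000:00:11.0)

    for bdf in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$bdf/remove"      # surprise-remove the function
    done

    echo 1 > /sys/bus/pci/rescan                         # the single "echo 1" at sw_hotplug.sh@56
    for bdf in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
        echo "$bdf" > /sys/bus/pci/drivers_probe         # the trace echoes the BDF twice; one target shown here
        echo '' > "/sys/bus/pci/devices/$bdf/driver_override"   # clear the override again
    done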
00:11:06.279 [2024-12-05 02:55:36.923847] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.279 [2024-12-05 02:55:36.923950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.279 [2024-12-05 02:55:36.924006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.279 [2024-12-05 02:55:36.924186] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.279 [2024-12-05 02:55:36.924208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.279 [2024-12-05 02:55:36.924233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.279 [2024-12-05 02:55:36.924297] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.279 [2024-12-05 02:55:36.924319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.279 [2024-12-05 02:55:36.924343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.279 [2024-12-05 02:55:36.924367] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.279 [2024-12-05 02:55:36.924422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.279 [2024-12-05 02:55:36.924451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.538 [2024-12-05 02:55:37.322587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
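Every rpc_cmd call in this trace is bracketed by common/autotest_common.sh@563's xtrace_disable and a matching `set +x` at @10, so the RPC plumbing itself does not flood the log with per-line tracing. The real helpers live in autotest_common.sh; the snippet below is only a rough stand-in for the idea, with made-up names:

    # Illustrative only: silence `set -x` around a chatty call, then restore it.
    quiet_xtrace_begin() {
        [[ $- == *x* ]] && _xtrace_was_on=1 || _xtrace_was_on=0
        set +x
    }
    quiet_xtrace_end() {
        if (( _xtrace_was_on )); then set -x; fi
    }

    quiet_xtrace_begin
    rpc_cmd bdev_get_bdevs > /dev/null   # runs without per-line tracing
    quiet_xtrace_end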
00:11:06.538 [2024-12-05 02:55:37.323812] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.538 [2024-12-05 02:55:37.323919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.538 [2024-12-05 02:55:37.323985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.538 [2024-12-05 02:55:37.324046] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.538 [2024-12-05 02:55:37.324067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.538 [2024-12-05 02:55:37.324128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.538 [2024-12-05 02:55:37.324158] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.538 [2024-12-05 02:55:37.324205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.538 [2024-12-05 02:55:37.324233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.538 [2024-12-05 02:55:37.324284] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.538 [2024-12-05 02:55:37.324304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.538 [2024-12-05 02:55:37.324328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.798 02:55:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:06.798 02:55:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:06.798 02:55:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:06.798 02:55:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:06.798 02:55:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:06.798 02:55:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:06.798 02:55:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.798 02:55:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:06.798 02:55:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.798 02:55:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:06.798 02:55:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:06.798 02:55:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:06.798 02:55:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:06.798 02:55:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:06.798 02:55:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:06.798 02:55:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:06.798 02:55:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:06.798 02:55:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:06.798 02:55:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:07.058 02:55:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:07.058 02:55:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:07.058 02:55:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:19.280 02:55:49 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:19.280 02:55:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:19.280 02:55:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:19.280 02:55:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:19.280 02:55:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:19.280 02:55:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:19.280 02:55:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.280 02:55:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.280 02:55:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.280 02:55:49 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:19.280 02:55:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:19.280 02:55:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:19.280 02:55:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:19.280 02:55:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:19.280 02:55:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:19.280 02:55:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:19.280 02:55:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:19.280 02:55:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:19.280 02:55:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:19.280 02:55:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:19.280 02:55:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:19.280 02:55:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.280 02:55:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.280 02:55:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.280 02:55:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:19.280 02:55:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:19.280 [2024-12-05 02:55:49.822781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
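The bdf lists that drive all of these checks come from the bdev_bdfs helper at sw_hotplug.sh@12-13, which the xtrace shows in full: ask the running SPDK target for its bdevs over RPC, pull out the NVMe PCI addresses, and de-duplicate them. Reassembled from the trace (rpc_cmd is autotest's wrapper around the JSON-RPC client):

    # bdev_bdfs, as reconstructed from the trace above: list every PCI address
    # currently backing an NVMe bdev in the target, sorted and de-duplicated.
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

Roughly the same listing can be produced by hand against a live target by piping scripts/rpc.py bdev_get_bdevs through the same jq filter.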
00:11:19.280 [2024-12-05 02:55:49.823987] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.280 [2024-12-05 02:55:49.824022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.280 [2024-12-05 02:55:49.824032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.280 [2024-12-05 02:55:49.824048] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.280 [2024-12-05 02:55:49.824055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.280 [2024-12-05 02:55:49.824065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.280 [2024-12-05 02:55:49.824081] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.280 [2024-12-05 02:55:49.824090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.280 [2024-12-05 02:55:49.824097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.280 [2024-12-05 02:55:49.824105] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.280 [2024-12-05 02:55:49.824112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.280 [2024-12-05 02:55:49.824120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.541 [2024-12-05 02:55:50.222789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:19.541 [2024-12-05 02:55:50.224033] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.541 [2024-12-05 02:55:50.224068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.541 [2024-12-05 02:55:50.224095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.541 [2024-12-05 02:55:50.224110] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.541 [2024-12-05 02:55:50.224119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.541 [2024-12-05 02:55:50.224126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.541 [2024-12-05 02:55:50.224135] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.541 [2024-12-05 02:55:50.224142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.541 [2024-12-05 02:55:50.224152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.541 [2024-12-05 02:55:50.224159] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.541 [2024-12-05 02:55:50.224176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.541 [2024-12-05 02:55:50.224183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.541 02:55:50 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:19.541 02:55:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:19.541 02:55:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:19.541 02:55:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:19.541 02:55:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:19.541 02:55:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:19.541 02:55:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.541 02:55:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.541 02:55:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.541 02:55:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:19.541 02:55:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:19.801 02:55:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:19.801 02:55:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:19.801 02:55:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:19.801 02:55:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:19.801 02:55:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:19.801 02:55:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:19.802 02:55:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:19.802 02:55:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:19.802 02:55:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:19.802 02:55:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:19.802 02:55:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:32.021 02:56:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:32.021 02:56:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:32.021 02:56:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:32.021 02:56:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:32.021 02:56:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:32.021 02:56:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:32.021 02:56:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.021 02:56:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:32.021 02:56:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.021 02:56:02 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:32.021 02:56:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:32.021 02:56:02 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.67 00:11:32.021 02:56:02 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.67 00:11:32.021 02:56:02 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:32.021 02:56:02 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.67 00:11:32.021 02:56:02 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.67 2 00:11:32.021 remove_attach_helper took 44.67s to complete (handling 2 nvme drive(s)) 02:56:02 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:32.021 02:56:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.021 02:56:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:32.021 02:56:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.021 02:56:02 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:32.021 02:56:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.021 02:56:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:32.021 02:56:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.021 02:56:02 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:32.021 02:56:02 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:32.021 02:56:02 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:32.021 02:56:02 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:32.021 02:56:02 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:32.021 02:56:02 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:32.021 02:56:02 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:32.021 02:56:02 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:32.021 02:56:02 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:32.021 02:56:02 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:32.021 02:56:02 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:32.021 02:56:02 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:32.021 02:56:02 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:38.635 02:56:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:38.635 02:56:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:38.635 02:56:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:38.635 02:56:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:38.635 02:56:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:38.635 02:56:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:38.635 02:56:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:38.635 02:56:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:38.636 02:56:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:38.636 02:56:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:38.636 02:56:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:38.636 02:56:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.636 02:56:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:38.636 02:56:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.636 02:56:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:38.636 02:56:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:38.636 [2024-12-05 02:56:08.720107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:38.636 [2024-12-05 02:56:08.721013] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.636 [2024-12-05 02:56:08.721051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.636 [2024-12-05 02:56:08.721062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.636 [2024-12-05 02:56:08.721090] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.636 [2024-12-05 02:56:08.721099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.636 [2024-12-05 02:56:08.721107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.636 [2024-12-05 02:56:08.721114] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.636 [2024-12-05 02:56:08.721123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.636 [2024-12-05 02:56:08.721130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.636 [2024-12-05 02:56:08.721138] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.636 [2024-12-05 02:56:08.721145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.636 [2024-12-05 02:56:08.721155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.636 [2024-12-05 02:56:09.120100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
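The "remove_attach_helper took 44.67s" summary just above comes from the timing_cmd wrapper in common/autotest_common.sh: the test toggles SPDK's own hotplug monitor off and back on (rpc_cmd bdev_nvme_set_hotplug -d, then -e), then times the helper with bash's `time` builtin and TIMEFORMAT=%2R so only the elapsed wall-clock seconds survive. A simplified, self-contained version of that idiom (the real wrapper does extra file-descriptor juggling to keep the timed command's own output separate):

    # Minimal sketch of the TIMEFORMAT=%2R timing idiom seen in the trace.
    run_timed() {
        local TIMEFORMAT=%2R      # `time` prints only elapsed seconds, 2 decimals
        local elapsed
        # `time` reports on the group's stderr; capture just that report.
        # (In this simplified form the command's stderr is captured with it.)
        elapsed=$( { time "$@" > /dev/null; } 2>&1 )
        printf '%s took %ss to complete\n' "$1" "$elapsed"
    }

    run_timed sleep 1.2           # prints something like "sleep took 1.20s to complete"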
00:11:38.636 [2024-12-05 02:56:09.120980] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.636 [2024-12-05 02:56:09.121009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.636 [2024-12-05 02:56:09.121020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.636 [2024-12-05 02:56:09.121033] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.636 [2024-12-05 02:56:09.121042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.636 [2024-12-05 02:56:09.121048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.636 [2024-12-05 02:56:09.121057] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.636 [2024-12-05 02:56:09.121063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.636 [2024-12-05 02:56:09.121084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.636 [2024-12-05 02:56:09.121092] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.636 [2024-12-05 02:56:09.121100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.636 [2024-12-05 02:56:09.121107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.636 02:56:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:38.636 02:56:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:38.636 02:56:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:38.636 02:56:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:38.636 02:56:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:38.636 02:56:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:38.636 02:56:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.636 02:56:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:38.636 02:56:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.636 02:56:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:38.636 02:56:09 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:38.636 02:56:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:38.636 02:56:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:38.636 02:56:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:38.636 02:56:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:38.636 02:56:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:38.636 02:56:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:38.636 02:56:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:38.636 02:56:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:38.636 02:56:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:38.636 02:56:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:38.636 02:56:09 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:50.874 02:56:21 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:50.874 02:56:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:50.874 02:56:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:50.874 02:56:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:50.874 02:56:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:50.874 02:56:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:50.874 02:56:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.874 02:56:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:50.874 02:56:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.874 02:56:21 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:50.874 02:56:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:50.874 02:56:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:50.875 02:56:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:50.875 [2024-12-05 02:56:21.520321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:50.875 [2024-12-05 02:56:21.521477] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:50.875 [2024-12-05 02:56:21.521583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:50.875 [2024-12-05 02:56:21.521650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.875 [2024-12-05 02:56:21.521770] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:50.875 [2024-12-05 02:56:21.521788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:50.875 [2024-12-05 02:56:21.521915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.875 [2024-12-05 02:56:21.521942] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:50.875 [2024-12-05 02:56:21.521960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:50.875 [2024-12-05 02:56:21.521983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.875 [2024-12-05 02:56:21.522007] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:50.875 [2024-12-05 02:56:21.522054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:50.875 [2024-12-05 02:56:21.522100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.875 02:56:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:50.875 02:56:21 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:50.875 02:56:21 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:50.875 02:56:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:50.875 02:56:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:50.875 02:56:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:50.875 02:56:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:50.875 02:56:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:50.875 02:56:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.875 02:56:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:50.875 02:56:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.875 02:56:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:50.875 02:56:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:51.136 [2024-12-05 02:56:21.920319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:51.136 [2024-12-05 02:56:21.921281] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.136 [2024-12-05 02:56:21.921381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.136 [2024-12-05 02:56:21.921444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.136 [2024-12-05 02:56:21.921475] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.136 [2024-12-05 02:56:21.921526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.136 [2024-12-05 02:56:21.921552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.136 [2024-12-05 02:56:21.921605] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.136 [2024-12-05 02:56:21.921624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.136 [2024-12-05 02:56:21.921648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.136 [2024-12-05 02:56:21.921673] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.136 [2024-12-05 02:56:21.921719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.136 [2024-12-05 02:56:21.921745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.397 02:56:22 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:51.397 02:56:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:51.397 02:56:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:51.397 02:56:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:51.397 02:56:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:51.397 02:56:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:51.397 02:56:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.397 02:56:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:51.397 02:56:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.397 02:56:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:51.397 02:56:22 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:51.397 02:56:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:51.397 02:56:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:51.397 02:56:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:51.659 02:56:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:51.659 02:56:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:51.659 02:56:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:51.659 02:56:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:51.659 02:56:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:51.659 02:56:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:51.659 02:56:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:51.659 02:56:22 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:03.892 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:03.892 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:03.892 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:03.892 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:03.892 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:03.892 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:03.892 02:56:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.892 02:56:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:03.892 02:56:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.892 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:03.892 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:03.892 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:03.892 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:03.892 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:03.892 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:03.892 [2024-12-05 02:56:34.420518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
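While the controllers are being torn down, sw_hotplug.sh@50-51 sits in a short poll: re-read the bdf list, announce anything that has not disappeared yet, and retry every half second. That is the "(( 2 > 0 )) ... sleep 0.5 ... Still waiting for ... to be gone" pattern that repeats through the trace. The loop shape below is an assumption; the individual commands are the ones shown:

    # Poll until every detached controller has dropped out of the bdev list.
    wait_until_gone() {
        local -a bdfs
        while bdfs=($(bdev_bdfs)) && (( ${#bdfs[@]} > 0 )); do
            printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
            sleep 0.5
        done
    }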
00:12:03.892 [2024-12-05 02:56:34.421484] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:03.892 [2024-12-05 02:56:34.421512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:03.893 [2024-12-05 02:56:34.421522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:03.893 [2024-12-05 02:56:34.421539] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:03.893 [2024-12-05 02:56:34.421547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:03.893 [2024-12-05 02:56:34.421555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:03.893 [2024-12-05 02:56:34.421562] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:03.893 [2024-12-05 02:56:34.421573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:03.893 [2024-12-05 02:56:34.421580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:03.893 [2024-12-05 02:56:34.421588] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:03.893 [2024-12-05 02:56:34.421594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:03.893 [2024-12-05 02:56:34.421602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:03.893 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:03.893 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:03.893 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:03.893 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:03.893 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:03.893 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:03.893 02:56:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.893 02:56:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:03.893 02:56:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.893 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:03.893 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:04.154 [2024-12-05 02:56:34.920514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
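Once the devices are re-bound, the iterations above all end the same way: sleep 12 seconds (twice the 6-second hotplug_wait set at sw_hotplug.sh@28), re-read the bdf list, and insist it matches the exact pair of test controllers (the @66-@71 sequence). The same check, condensed; the expected list and the error handling here are illustrative:

    # Verify both controllers re-appeared as NVMe bdevs after the re-attach.
    expected='0000:00:10.0 0000:00:11.0'   # the two devices under test in this run
    sleep 12                               # 2 * hotplug_wait, as in the trace
    bdfs=($(bdev_bdfs))
    [[ "${bdfs[*]}" == "$expected" ]] || { echo "re-attach failed, got: ${bdfs[*]}"; exit 1; }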
00:12:04.154 [2024-12-05 02:56:34.921401] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.154 [2024-12-05 02:56:34.921520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.154 [2024-12-05 02:56:34.921537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.154 [2024-12-05 02:56:34.921550] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.154 [2024-12-05 02:56:34.921559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.154 [2024-12-05 02:56:34.921566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.154 [2024-12-05 02:56:34.921576] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.154 [2024-12-05 02:56:34.921582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.154 [2024-12-05 02:56:34.921590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.154 [2024-12-05 02:56:34.921597] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.154 [2024-12-05 02:56:34.921608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.154 [2024-12-05 02:56:34.921614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.154 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:04.154 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:04.154 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:04.154 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:04.154 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:04.154 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:04.154 02:56:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.155 02:56:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:04.155 02:56:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.415 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:04.415 02:56:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:04.415 02:56:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:04.415 02:56:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:04.415 02:56:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:04.415 02:56:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:04.415 02:56:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:04.415 02:56:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:04.415 02:56:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:04.415 02:56:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:04.415 02:56:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:04.415 02:56:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:04.415 02:56:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:16.704 02:56:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:16.704 02:56:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:16.704 02:56:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:16.705 02:56:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:16.705 02:56:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:16.705 02:56:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:16.705 02:56:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.705 02:56:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:16.705 02:56:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.705 02:56:47 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:16.705 02:56:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:16.705 02:56:47 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.64 00:12:16.705 02:56:47 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.64 00:12:16.705 02:56:47 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:16.705 02:56:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.64 00:12:16.705 02:56:47 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.64 2 00:12:16.705 remove_attach_helper took 44.64s to complete (handling 2 nvme drive(s)) 02:56:47 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:16.705 02:56:47 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67229 00:12:16.705 02:56:47 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67229 ']' 00:12:16.705 02:56:47 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67229 00:12:16.705 02:56:47 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:16.705 02:56:47 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:16.705 02:56:47 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67229 00:12:16.705 02:56:47 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:16.705 02:56:47 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:16.705 02:56:47 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67229' 00:12:16.705 killing process with pid 67229 00:12:16.705 02:56:47 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67229 00:12:16.705 02:56:47 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67229 00:12:17.648 02:56:48 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:18.220 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:18.482 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:18.482 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:18.482 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:18.743 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:18.743 00:12:18.743 real 2m29.161s 00:12:18.743 user 1m50.811s 00:12:18.743 sys 0m16.846s 00:12:18.743 02:56:49 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.743 02:56:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:18.743 ************************************ 00:12:18.743 END TEST sw_hotplug 00:12:18.743 ************************************ 00:12:18.743 02:56:49 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:18.743 02:56:49 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:18.743 02:56:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:18.744 02:56:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.744 02:56:49 -- common/autotest_common.sh@10 -- # set +x 00:12:18.744 ************************************ 00:12:18.744 START TEST nvme_xnvme 00:12:18.744 ************************************ 00:12:18.744 02:56:49 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:18.744 * Looking for test storage... 00:12:18.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:18.744 02:56:49 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:18.744 02:56:49 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:12:18.744 02:56:49 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:19.009 02:56:49 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:19.009 02:56:49 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:19.009 02:56:49 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:19.009 02:56:49 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:19.009 02:56:49 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:19.009 02:56:49 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:19.009 02:56:49 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:19.009 02:56:49 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:19.009 02:56:49 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:19.009 02:56:49 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:19.009 02:56:49 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:19.009 02:56:49 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:19.009 02:56:49 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:19.009 02:56:49 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:19.009 02:56:49 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:19.009 02:56:49 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:19.009 02:56:49 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:19.009 02:56:49 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:19.009 02:56:49 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:19.009 02:56:49 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:19.009 02:56:49 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:19.009 02:56:49 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:19.009 02:56:49 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:19.009 02:56:49 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:19.009 02:56:49 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:19.009 02:56:49 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:19.009 02:56:49 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:19.009 02:56:49 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:19.009 02:56:49 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:19.009 02:56:49 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:19.009 02:56:49 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:19.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.009 --rc genhtml_branch_coverage=1 00:12:19.009 --rc genhtml_function_coverage=1 00:12:19.009 --rc genhtml_legend=1 00:12:19.009 --rc geninfo_all_blocks=1 00:12:19.009 --rc geninfo_unexecuted_blocks=1 00:12:19.009 00:12:19.009 ' 00:12:19.009 02:56:49 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:19.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.009 --rc genhtml_branch_coverage=1 00:12:19.009 --rc genhtml_function_coverage=1 00:12:19.009 --rc genhtml_legend=1 00:12:19.009 --rc geninfo_all_blocks=1 00:12:19.009 --rc geninfo_unexecuted_blocks=1 00:12:19.009 00:12:19.009 ' 00:12:19.009 02:56:49 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:19.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.009 --rc genhtml_branch_coverage=1 00:12:19.009 --rc genhtml_function_coverage=1 00:12:19.009 --rc genhtml_legend=1 00:12:19.009 --rc geninfo_all_blocks=1 00:12:19.009 --rc geninfo_unexecuted_blocks=1 00:12:19.009 00:12:19.009 ' 00:12:19.009 02:56:49 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:19.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.009 --rc genhtml_branch_coverage=1 00:12:19.009 --rc genhtml_function_coverage=1 00:12:19.009 --rc genhtml_legend=1 00:12:19.009 --rc geninfo_all_blocks=1 00:12:19.009 --rc geninfo_unexecuted_blocks=1 00:12:19.009 00:12:19.009 ' 00:12:19.009 02:56:49 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:12:19.009 02:56:49 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:19.009 02:56:49 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:19.009 02:56:49 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:12:19.009 02:56:49 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:19.009 02:56:49 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:19.009 02:56:49 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:19.009 02:56:49 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:19.009 02:56:49 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:19.009 02:56:49 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:19.009 02:56:49 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:19.009 02:56:49 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:19.009 02:56:49 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:19.009 02:56:49 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:19.009 02:56:49 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:19.009 02:56:49 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:19.009 02:56:49 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:19.009 02:56:49 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:19.009 02:56:49 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:19.009 02:56:49 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:19.009 02:56:49 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:19.009 02:56:49 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:19.009 02:56:49 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:19.009 02:56:49 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:19.009 02:56:49 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:19.010 02:56:49 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:19.010 02:56:49 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:19.010 02:56:49 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:19.010 02:56:49 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:19.010 02:56:49 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:19.010 02:56:49 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:19.010 02:56:49 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:19.010 02:56:49 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:19.010 02:56:49 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:19.010 02:56:49 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:19.010 02:56:49 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:19.010 02:56:49 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:19.010 02:56:49 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:19.010 02:56:49 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:19.010 02:56:49 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:19.010 02:56:49 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:19.010 02:56:49 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:19.010 02:56:49 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:19.010 #define SPDK_CONFIG_H 00:12:19.010 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:19.010 #define SPDK_CONFIG_APPS 1 00:12:19.010 #define SPDK_CONFIG_ARCH native 00:12:19.010 #define SPDK_CONFIG_ASAN 1 00:12:19.010 #undef SPDK_CONFIG_AVAHI 00:12:19.010 #undef SPDK_CONFIG_CET 00:12:19.010 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:19.010 #define SPDK_CONFIG_COVERAGE 1 00:12:19.010 #define SPDK_CONFIG_CROSS_PREFIX 00:12:19.010 #undef SPDK_CONFIG_CRYPTO 00:12:19.010 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:19.010 #undef SPDK_CONFIG_CUSTOMOCF 00:12:19.010 #undef SPDK_CONFIG_DAOS 00:12:19.010 #define SPDK_CONFIG_DAOS_DIR 00:12:19.010 #define SPDK_CONFIG_DEBUG 1 00:12:19.010 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:19.010 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:19.010 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:19.010 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:19.010 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:19.010 #undef SPDK_CONFIG_DPDK_UADK 00:12:19.010 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:19.010 #define SPDK_CONFIG_EXAMPLES 1 00:12:19.010 #undef SPDK_CONFIG_FC 00:12:19.010 #define SPDK_CONFIG_FC_PATH 00:12:19.010 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:19.010 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:19.010 #define SPDK_CONFIG_FSDEV 1 00:12:19.010 #undef SPDK_CONFIG_FUSE 00:12:19.010 #undef SPDK_CONFIG_FUZZER 00:12:19.010 #define SPDK_CONFIG_FUZZER_LIB 00:12:19.010 #undef SPDK_CONFIG_GOLANG 00:12:19.010 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:19.010 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:19.010 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:19.010 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:19.010 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:19.010 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:19.010 #undef SPDK_CONFIG_HAVE_LZ4 00:12:19.010 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:19.010 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:19.010 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:19.010 #define SPDK_CONFIG_IDXD 1 00:12:19.010 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:19.010 #undef SPDK_CONFIG_IPSEC_MB 00:12:19.010 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:19.010 #define SPDK_CONFIG_ISAL 1 00:12:19.010 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:19.010 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:19.010 #define SPDK_CONFIG_LIBDIR 00:12:19.010 #undef SPDK_CONFIG_LTO 00:12:19.010 #define SPDK_CONFIG_MAX_LCORES 128 00:12:19.010 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:19.010 #define SPDK_CONFIG_NVME_CUSE 1 00:12:19.010 #undef SPDK_CONFIG_OCF 00:12:19.010 #define SPDK_CONFIG_OCF_PATH 00:12:19.010 #define SPDK_CONFIG_OPENSSL_PATH 00:12:19.010 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:19.010 #define SPDK_CONFIG_PGO_DIR 00:12:19.010 #undef SPDK_CONFIG_PGO_USE 00:12:19.010 #define SPDK_CONFIG_PREFIX /usr/local 00:12:19.010 #undef SPDK_CONFIG_RAID5F 00:12:19.010 #undef SPDK_CONFIG_RBD 00:12:19.010 #define SPDK_CONFIG_RDMA 1 00:12:19.010 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:19.011 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:19.011 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:19.011 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:19.011 #define SPDK_CONFIG_SHARED 1 00:12:19.011 #undef SPDK_CONFIG_SMA 00:12:19.011 #define SPDK_CONFIG_TESTS 1 00:12:19.011 #undef SPDK_CONFIG_TSAN 00:12:19.011 #define SPDK_CONFIG_UBLK 1 00:12:19.011 #define SPDK_CONFIG_UBSAN 1 00:12:19.011 #undef SPDK_CONFIG_UNIT_TESTS 00:12:19.011 #undef SPDK_CONFIG_URING 00:12:19.011 #define SPDK_CONFIG_URING_PATH 00:12:19.011 #undef SPDK_CONFIG_URING_ZNS 00:12:19.011 #undef SPDK_CONFIG_USDT 00:12:19.011 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:19.011 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:19.011 #undef SPDK_CONFIG_VFIO_USER 00:12:19.011 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:19.011 #define SPDK_CONFIG_VHOST 1 00:12:19.011 #define SPDK_CONFIG_VIRTIO 1 00:12:19.011 #undef SPDK_CONFIG_VTUNE 00:12:19.011 #define SPDK_CONFIG_VTUNE_DIR 00:12:19.011 #define SPDK_CONFIG_WERROR 1 00:12:19.011 #define SPDK_CONFIG_WPDK_DIR 00:12:19.011 #define SPDK_CONFIG_XNVME 1 00:12:19.011 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:19.011 02:56:49 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:19.011 02:56:49 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:19.011 02:56:49 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:19.011 02:56:49 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.011 02:56:49 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.011 02:56:49 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.011 02:56:49 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.011 02:56:49 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.011 02:56:49 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.011 02:56:49 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:19.011 02:56:49 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:19.011 02:56:49 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:19.011 02:56:49 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:19.011 02:56:49 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:19.011 02:56:49 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:19.011 02:56:49 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:19.011 02:56:49 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:12:19.011 02:56:49 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:19.011 02:56:49 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:19.011 02:56:49 nvme_xnvme -- pm/common@68 -- # uname -s 00:12:19.011 02:56:49 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:12:19.011 02:56:49 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:19.011 
02:56:49 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:19.011 02:56:49 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:19.011 02:56:49 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:19.011 02:56:49 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:19.011 02:56:49 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:19.011 02:56:49 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:12:19.011 02:56:49 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:19.011 02:56:49 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:19.011 02:56:49 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:19.011 02:56:49 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:19.011 02:56:49 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:19.011 02:56:49 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:12:19.011 02:56:49 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:12:19.011 02:56:49 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:19.012 02:56:49 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:19.012 02:56:49 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
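The exports traced here wire up the sanitizer runtimes for every test binary in the run: ASAN_OPTIONS and UBSAN_OPTIONS tighten error handling (abort on error, no core dumps), and LSAN_OPTIONS points leak detection at a suppression file that ignores the known libfuse3.so leak. A condensed stand-alone sketch of that environment, using only the values and paths visible in this trace (the real harness may append further suppressions; this is an illustration, not the autotest_common.sh source):

# Sketch: sanitizer environment the autotest harness exports before running test binaries.
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"
# Only the suppression visible in this trace; the harness may add more.
echo "leak:libfuse3.so" > "$asan_suppression_file"
export LSAN_OPTIONS=suppressions=$asan_suppression_file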
00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68560 ]] 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68560 00:12:19.012 02:56:49 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.WRV05h 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.WRV05h/tests/xnvme /tmp/spdk.WRV05h 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:12:19.013 02:56:49 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13953171456 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5614977024 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260625408 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265389056 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13953171456 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5614977024 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265241600 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265389056 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.013 02:56:49 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=98835116032 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=867663872 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:19.013 * Looking for test storage... 
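The set_test_storage trace around this point scans every mounted filesystem with df -T, records its free space, and then walks the candidate directories until one has room for the roughly 2 GiB the xnvme tests request. A condensed sketch of that selection logic, reusing the variable names and candidate paths shown in the trace; it assumes df reports byte-sized values (as the numbers in the trace suggest) and omits the tmpfs/ramfs special cases and padding the real helper applies:

# Sketch: pick the first candidate directory whose filesystem has enough free space.
requested_size=2147483648   # 2 GiB; the helper later pads this with a small margin
storage_candidates=(/home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.WRV05h/tests/xnvme /tmp/spdk.WRV05h)
declare -A mounts fss sizes avails uses
# Record source device, fs type, size, used and available bytes for every mount point.
while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source; fss["$mount"]=$fs
    sizes["$mount"]=$size; avails["$mount"]=$avail; uses["$mount"]=$use
done < <(df -T | grep -v Filesystem)
for target_dir in "${storage_candidates[@]}"; do
    # Resolve the mount point backing this candidate directory.
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    target_space=${avails["$mount"]}
    if (( target_space >= requested_size )); then
        export SPDK_TEST_STORAGE=$target_dir
        printf '* Found test storage at %s\n' "$target_dir"
        break
    fi
done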
00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13953171456 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:19.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:19.013 02:56:49 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:19.013 02:56:49 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:19.013 02:56:49 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:19.013 02:56:49 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:19.013 02:56:49 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:19.013 02:56:49 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:19.013 02:56:49 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:19.013 02:56:49 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:19.013 02:56:49 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:19.013 02:56:49 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:19.013 02:56:49 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:19.013 02:56:49 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:19.013 02:56:49 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:19.013 02:56:49 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:19.013 02:56:49 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:19.014 02:56:49 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:19.014 02:56:49 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:19.014 02:56:49 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:19.014 02:56:49 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:19.014 02:56:49 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:19.014 02:56:49 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:19.014 02:56:49 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:19.014 02:56:49 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:19.014 02:56:49 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:19.014 02:56:49 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:19.014 02:56:49 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:19.014 02:56:49 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:19.014 02:56:49 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:19.014 02:56:49 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:19.014 02:56:49 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:19.014 02:56:49 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:19.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.014 --rc genhtml_branch_coverage=1 00:12:19.014 --rc genhtml_function_coverage=1 00:12:19.014 --rc genhtml_legend=1 00:12:19.014 --rc geninfo_all_blocks=1 00:12:19.014 --rc geninfo_unexecuted_blocks=1 00:12:19.014 00:12:19.014 ' 00:12:19.014 02:56:49 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:19.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.014 --rc genhtml_branch_coverage=1 00:12:19.014 --rc genhtml_function_coverage=1 00:12:19.014 --rc genhtml_legend=1 00:12:19.014 --rc geninfo_all_blocks=1 
00:12:19.014 --rc geninfo_unexecuted_blocks=1 00:12:19.014 00:12:19.014 ' 00:12:19.014 02:56:49 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:19.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.014 --rc genhtml_branch_coverage=1 00:12:19.014 --rc genhtml_function_coverage=1 00:12:19.014 --rc genhtml_legend=1 00:12:19.014 --rc geninfo_all_blocks=1 00:12:19.014 --rc geninfo_unexecuted_blocks=1 00:12:19.014 00:12:19.014 ' 00:12:19.014 02:56:49 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:19.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.014 --rc genhtml_branch_coverage=1 00:12:19.014 --rc genhtml_function_coverage=1 00:12:19.014 --rc genhtml_legend=1 00:12:19.014 --rc geninfo_all_blocks=1 00:12:19.014 --rc geninfo_unexecuted_blocks=1 00:12:19.014 00:12:19.014 ' 00:12:19.014 02:56:49 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:19.014 02:56:49 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:19.014 02:56:49 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.014 02:56:49 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.014 02:56:49 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.014 02:56:49 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.014 02:56:49 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.014 02:56:49 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.014 02:56:49 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:19.014 02:56:49 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.014 02:56:49 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:12:19.014 02:56:49 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:12:19.014 02:56:49 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:12:19.014 02:56:49 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:12:19.014 02:56:49 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:12:19.014 02:56:49 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:12:19.014 02:56:49 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:12:19.014 02:56:49 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:12:19.014 02:56:49 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:12:19.014 02:56:49 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:12:19.014 02:56:49 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:12:19.014 02:56:49 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:12:19.014 02:56:49 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:12:19.014 02:56:49 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:12:19.014 02:56:49 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:12:19.014 02:56:49 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:12:19.014 02:56:49 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:12:19.014 02:56:49 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:12:19.014 02:56:49 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:12:19.014 02:56:49 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:12:19.014 02:56:49 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:12:19.014 02:56:49 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:19.275 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:19.536 Waiting for block devices as requested 00:12:19.536 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:19.797 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:19.797 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:19.797 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:25.090 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:25.090 02:56:55 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:25.350 02:56:56 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:25.350 02:56:56 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:25.610 02:56:56 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:25.610 02:56:56 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:25.610 02:56:56 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:25.610 02:56:56 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:25.610 02:56:56 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:25.610 No valid GPT data, bailing 00:12:25.610 02:56:56 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:25.610 02:56:56 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:12:25.610 02:56:56 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:25.610 02:56:56 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:25.610 02:56:56 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:25.610 02:56:56 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:25.610 02:56:56 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:25.610 02:56:56 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:25.610 02:56:56 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:25.610 02:56:56 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:25.610 02:56:56 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:25.610 02:56:56 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:25.610 02:56:56 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:25.610 02:56:56 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:25.610 02:56:56 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:25.610 02:56:56 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:25.610 02:56:56 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:25.610 02:56:56 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:25.610 02:56:56 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.610 02:56:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:25.610 ************************************ 00:12:25.610 START TEST xnvme_rpc 00:12:25.610 ************************************ 00:12:25.610 02:56:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:25.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.610 02:56:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:25.610 02:56:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:25.610 02:56:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:25.610 02:56:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:25.610 02:56:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=68958 00:12:25.610 02:56:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 68958 00:12:25.610 02:56:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 68958 ']' 00:12:25.610 02:56:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.610 02:56:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:25.610 02:56:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:25.610 02:56:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.610 02:56:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:25.610 02:56:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.610 [2024-12-05 02:56:56.445252] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:12:25.610 [2024-12-05 02:56:56.445397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68958 ] 00:12:25.871 [2024-12-05 02:56:56.610655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.132 [2024-12-05 02:56:56.729386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.717 xnvme_bdev 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:26.717 02:56:57 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.717 02:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.978 02:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.978 02:56:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 68958 00:12:26.978 02:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 68958 ']' 00:12:26.978 02:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 68958 00:12:26.978 02:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:26.978 02:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:26.978 02:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68958 00:12:26.978 02:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:26.978 02:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:26.978 killing process with pid 68958 00:12:26.978 02:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68958' 00:12:26.978 02:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 68958 00:12:26.978 02:56:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 68958 00:12:28.893 ************************************ 00:12:28.893 00:12:28.893 real 0m2.906s 00:12:28.893 user 0m2.905s 00:12:28.893 sys 0m0.468s 00:12:28.893 02:56:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.893 02:56:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.893 END TEST xnvme_rpc 00:12:28.893 ************************************ 00:12:28.893 02:56:59 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:28.893 02:56:59 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:28.893 02:56:59 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.893 02:56:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:28.893 ************************************ 00:12:28.893 START TEST xnvme_bdevperf 00:12:28.893 ************************************ 00:12:28.893 02:56:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:28.893 02:56:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:28.893 02:56:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:28.893 02:56:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:28.893 02:56:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:28.893 02:56:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:12:28.893 02:56:59 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:28.893 02:56:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:28.893 { 00:12:28.893 "subsystems": [ 00:12:28.893 { 00:12:28.893 "subsystem": "bdev", 00:12:28.893 "config": [ 00:12:28.893 { 00:12:28.893 "params": { 00:12:28.893 "io_mechanism": "libaio", 00:12:28.893 "conserve_cpu": false, 00:12:28.893 "filename": "/dev/nvme0n1", 00:12:28.893 "name": "xnvme_bdev" 00:12:28.893 }, 00:12:28.893 "method": "bdev_xnvme_create" 00:12:28.893 }, 00:12:28.893 { 00:12:28.893 "method": "bdev_wait_for_examine" 00:12:28.893 } 00:12:28.893 ] 00:12:28.893 } 00:12:28.893 ] 00:12:28.893 } 00:12:28.893 [2024-12-05 02:56:59.405733] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:12:28.893 [2024-12-05 02:56:59.405865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69027 ] 00:12:28.893 [2024-12-05 02:56:59.566998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.893 [2024-12-05 02:56:59.684331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.154 Running I/O for 5 seconds... 00:12:31.482 25713.00 IOPS, 100.44 MiB/s [2024-12-05T02:57:03.270Z] 25545.00 IOPS, 99.79 MiB/s [2024-12-05T02:57:04.214Z] 25394.67 IOPS, 99.20 MiB/s [2024-12-05T02:57:05.159Z] 25183.75 IOPS, 98.37 MiB/s 00:12:34.315 Latency(us) 00:12:34.315 [2024-12-05T02:57:05.159Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:34.315 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:34.315 xnvme_bdev : 5.00 25215.48 98.50 0.00 0.00 2532.86 510.42 10284.11 00:12:34.315 [2024-12-05T02:57:05.159Z] =================================================================================================================== 00:12:34.315 [2024-12-05T02:57:05.159Z] Total : 25215.48 98.50 0.00 0.00 2532.86 510.42 10284.11 00:12:35.256 02:57:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:35.256 02:57:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:35.256 02:57:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:35.256 02:57:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:35.256 02:57:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:35.256 { 00:12:35.256 "subsystems": [ 00:12:35.256 { 00:12:35.256 "subsystem": "bdev", 00:12:35.256 "config": [ 00:12:35.256 { 00:12:35.256 "params": { 00:12:35.256 "io_mechanism": "libaio", 00:12:35.256 "conserve_cpu": false, 00:12:35.256 "filename": "/dev/nvme0n1", 00:12:35.256 "name": "xnvme_bdev" 00:12:35.256 }, 00:12:35.256 "method": "bdev_xnvme_create" 00:12:35.256 }, 00:12:35.256 { 00:12:35.256 "method": "bdev_wait_for_examine" 00:12:35.256 } 00:12:35.256 ] 00:12:35.256 } 00:12:35.256 ] 00:12:35.256 } 00:12:35.256 [2024-12-05 02:57:05.872144] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
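Each bdevperf run in this section is configured purely through a small generated JSON document handed over a file descriptor (--json /dev/fd/62). Written out to an ordinary file, the randread run that just completed could be reproduced roughly as below; the scratch file name is invented for illustration, and everything else mirrors the trace.

cat > /tmp/xnvme_libaio.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_xnvme_create",
          "params": {
            "io_mechanism": "libaio",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          }
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/xnvme_libaio.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096

The same config, with -w randwrite, drives the write-side run that starts next.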
00:12:35.256 [2024-12-05 02:57:05.872507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69103 ] 00:12:35.256 [2024-12-05 02:57:06.035381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.517 [2024-12-05 02:57:06.155530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.777 Running I/O for 5 seconds... 00:12:37.670 33982.00 IOPS, 132.74 MiB/s [2024-12-05T02:57:09.524Z] 33728.50 IOPS, 131.75 MiB/s [2024-12-05T02:57:10.469Z] 34141.00 IOPS, 133.36 MiB/s [2024-12-05T02:57:11.856Z] 34141.75 IOPS, 133.37 MiB/s 00:12:41.012 Latency(us) 00:12:41.012 [2024-12-05T02:57:11.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:41.012 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:41.012 xnvme_bdev : 5.00 34017.10 132.88 0.00 0.00 1876.63 381.24 6326.74 00:12:41.012 [2024-12-05T02:57:11.856Z] =================================================================================================================== 00:12:41.012 [2024-12-05T02:57:11.856Z] Total : 34017.10 132.88 0.00 0.00 1876.63 381.24 6326.74 00:12:41.585 00:12:41.585 real 0m12.939s 00:12:41.585 user 0m5.191s 00:12:41.586 sys 0m6.306s 00:12:41.586 02:57:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:41.586 02:57:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:41.586 ************************************ 00:12:41.586 END TEST xnvme_bdevperf 00:12:41.586 ************************************ 00:12:41.586 02:57:12 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:41.586 02:57:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:41.586 02:57:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:41.586 02:57:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:41.586 ************************************ 00:12:41.586 START TEST xnvme_fio_plugin 00:12:41.586 ************************************ 00:12:41.586 02:57:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:41.586 02:57:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:41.586 02:57:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:41.586 02:57:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:41.586 02:57:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:41.586 02:57:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:41.586 02:57:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:41.586 02:57:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:41.586 02:57:12 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:41.586 02:57:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:41.586 02:57:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:41.586 02:57:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:41.586 02:57:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:41.586 02:57:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:41.586 02:57:12 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:41.586 02:57:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:41.586 02:57:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:41.586 02:57:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:41.586 02:57:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:41.586 02:57:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:41.586 02:57:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:41.586 02:57:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:41.586 02:57:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:41.586 02:57:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:41.586 { 00:12:41.586 "subsystems": [ 00:12:41.586 { 00:12:41.586 "subsystem": "bdev", 00:12:41.586 "config": [ 00:12:41.586 { 00:12:41.586 "params": { 00:12:41.586 "io_mechanism": "libaio", 00:12:41.586 "conserve_cpu": false, 00:12:41.586 "filename": "/dev/nvme0n1", 00:12:41.586 "name": "xnvme_bdev" 00:12:41.586 }, 00:12:41.586 "method": "bdev_xnvme_create" 00:12:41.586 }, 00:12:41.586 { 00:12:41.586 "method": "bdev_wait_for_examine" 00:12:41.586 } 00:12:41.586 ] 00:12:41.586 } 00:12:41.586 ] 00:12:41.586 } 00:12:41.847 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:41.847 fio-3.35 00:12:41.847 Starting 1 thread 00:12:48.431 00:12:48.432 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69222: Thu Dec 5 02:57:18 2024 00:12:48.432 read: IOPS=31.5k, BW=123MiB/s (129MB/s)(616MiB/5001msec) 00:12:48.432 slat (usec): min=4, max=2011, avg=20.75, stdev=101.57 00:12:48.432 clat (usec): min=105, max=7843, avg=1467.06, stdev=520.05 00:12:48.432 lat (usec): min=196, max=7848, avg=1487.81, stdev=508.53 00:12:48.432 clat percentiles (usec): 00:12:48.432 | 1.00th=[ 302], 5.00th=[ 611], 10.00th=[ 799], 20.00th=[ 1045], 00:12:48.432 | 30.00th=[ 1205], 40.00th=[ 1352], 50.00th=[ 1483], 60.00th=[ 1598], 00:12:48.432 | 70.00th=[ 1713], 80.00th=[ 1860], 90.00th=[ 2073], 95.00th=[ 2278], 00:12:48.432 | 99.00th=[ 2835], 99.50th=[ 3163], 99.90th=[ 3949], 99.95th=[ 4178], 00:12:48.432 | 99.99th=[ 4621] 00:12:48.432 bw ( KiB/s): min=116256, max=137176, per=99.94%, avg=125995.56, stdev=6998.00, 
samples=9 00:12:48.432 iops : min=29064, max=34294, avg=31498.89, stdev=1749.50, samples=9 00:12:48.432 lat (usec) : 250=0.52%, 500=2.53%, 750=5.36%, 1000=9.49% 00:12:48.432 lat (msec) : 2=69.58%, 4=12.44%, 10=0.09% 00:12:48.432 cpu : usr=47.30%, sys=44.88%, ctx=13, majf=0, minf=764 00:12:48.432 IO depths : 1=0.6%, 2=1.4%, 4=3.4%, 8=8.5%, 16=22.6%, 32=61.4%, >=64=2.1% 00:12:48.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.432 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:48.432 issued rwts: total=157618,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:48.432 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:48.432 00:12:48.432 Run status group 0 (all jobs): 00:12:48.432 READ: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=616MiB (646MB), run=5001-5001msec 00:12:48.432 ----------------------------------------------------- 00:12:48.432 Suppressions used: 00:12:48.432 count bytes template 00:12:48.432 1 11 /usr/src/fio/parse.c 00:12:48.432 1 8 libtcmalloc_minimal.so 00:12:48.432 1 904 libcrypto.so 00:12:48.432 ----------------------------------------------------- 00:12:48.432 00:12:48.432 02:57:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:48.432 02:57:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:48.432 02:57:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:48.432 02:57:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:48.432 02:57:19 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:48.432 02:57:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:48.432 02:57:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:48.432 02:57:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:48.432 02:57:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:48.432 02:57:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:48.432 02:57:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:48.432 02:57:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:48.432 02:57:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:48.432 02:57:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:48.432 02:57:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:48.432 02:57:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:48.692 02:57:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:48.692 02:57:19 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:48.692 02:57:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:48.692 02:57:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:48.692 02:57:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:48.692 { 00:12:48.692 "subsystems": [ 00:12:48.692 { 00:12:48.692 "subsystem": "bdev", 00:12:48.692 "config": [ 00:12:48.692 { 00:12:48.692 "params": { 00:12:48.692 "io_mechanism": "libaio", 00:12:48.692 "conserve_cpu": false, 00:12:48.692 "filename": "/dev/nvme0n1", 00:12:48.692 "name": "xnvme_bdev" 00:12:48.692 }, 00:12:48.692 "method": "bdev_xnvme_create" 00:12:48.692 }, 00:12:48.692 { 00:12:48.692 "method": "bdev_wait_for_examine" 00:12:48.692 } 00:12:48.692 ] 00:12:48.692 } 00:12:48.692 ] 00:12:48.692 } 00:12:48.692 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:48.692 fio-3.35 00:12:48.692 Starting 1 thread 00:12:55.281 00:12:55.281 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69314: Thu Dec 5 02:57:25 2024 00:12:55.281 write: IOPS=32.4k, BW=127MiB/s (133MB/s)(634MiB/5001msec); 0 zone resets 00:12:55.281 slat (usec): min=4, max=1880, avg=23.30, stdev=96.10 00:12:55.281 clat (usec): min=106, max=4987, avg=1335.86, stdev=525.94 00:12:55.281 lat (usec): min=185, max=5020, avg=1359.16, stdev=516.36 00:12:55.281 clat percentiles (usec): 00:12:55.281 | 1.00th=[ 285], 5.00th=[ 523], 10.00th=[ 685], 20.00th=[ 898], 00:12:55.281 | 30.00th=[ 1057], 40.00th=[ 1188], 50.00th=[ 1303], 60.00th=[ 1434], 00:12:55.281 | 70.00th=[ 1565], 80.00th=[ 1745], 90.00th=[ 1975], 95.00th=[ 2212], 00:12:55.281 | 99.00th=[ 2868], 99.50th=[ 3163], 99.90th=[ 3785], 99.95th=[ 4080], 00:12:55.281 | 99.99th=[ 4752] 00:12:55.281 bw ( KiB/s): min=120896, max=135392, per=99.18%, avg=128722.67, stdev=4376.21, samples=9 00:12:55.281 iops : min=30224, max=33848, avg=32180.67, stdev=1094.05, samples=9 00:12:55.281 lat (usec) : 250=0.64%, 500=3.83%, 750=8.11%, 1000=13.45% 00:12:55.281 lat (msec) : 2=64.70%, 4=9.21%, 10=0.06% 00:12:55.281 cpu : usr=36.56%, sys=53.46%, ctx=10, majf=0, minf=765 00:12:55.281 IO depths : 1=0.4%, 2=1.0%, 4=2.8%, 8=7.9%, 16=22.7%, 32=63.1%, >=64=2.1% 00:12:55.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:55.281 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:12:55.281 issued rwts: total=0,162273,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:55.281 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:55.281 00:12:55.281 Run status group 0 (all jobs): 00:12:55.281 WRITE: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=634MiB (665MB), run=5001-5001msec 00:12:55.542 ----------------------------------------------------- 00:12:55.542 Suppressions used: 00:12:55.542 count bytes template 00:12:55.542 1 11 /usr/src/fio/parse.c 00:12:55.542 1 8 libtcmalloc_minimal.so 00:12:55.542 1 904 libcrypto.so 00:12:55.542 ----------------------------------------------------- 00:12:55.542 00:12:55.542 ************************************ 00:12:55.542 END TEST xnvme_fio_plugin 00:12:55.542 ************************************ 00:12:55.542 
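The fio pass that just finished exercises the same xnvme bdev through SPDK's fio bdev plugin instead of bdevperf: the generated JSON config goes in via --spdk_json_conf, the plugin (together with libasan, since this is an ASAN build) is loaded through LD_PRELOAD, and the bdev name doubles as the fio --filename. A condensed sketch of the invocation, reusing the hypothetical scratch config from the bdevperf note above in place of /dev/fd/62:

LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_libaio.json \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev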
00:12:55.542 real 0m13.838s 00:12:55.542 user 0m7.010s 00:12:55.542 sys 0m5.538s 00:12:55.542 02:57:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.542 02:57:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:55.542 02:57:26 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:55.542 02:57:26 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:55.542 02:57:26 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:55.542 02:57:26 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:55.542 02:57:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:55.542 02:57:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.542 02:57:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.542 ************************************ 00:12:55.542 START TEST xnvme_rpc 00:12:55.542 ************************************ 00:12:55.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.542 02:57:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:55.542 02:57:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:55.542 02:57:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:55.542 02:57:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:55.542 02:57:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:55.542 02:57:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69400 00:12:55.542 02:57:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69400 00:12:55.542 02:57:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69400 ']' 00:12:55.542 02:57:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.542 02:57:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:55.542 02:57:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.542 02:57:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:55.542 02:57:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:55.542 02:57:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.542 [2024-12-05 02:57:26.342886] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
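This second xnvme_rpc pass repeats the libaio flow with CPU conservation switched on: the cc["true"]=-c mapping set up earlier appends -c to bdev_xnvme_create, and the later jq check expects the stored config to report "conserve_cpu": true. In sketch form, with the same assumed rpc.py path as before:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c     # trailing -c requests conserve_cpu=true
$RPC framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # the test expects "true"
$RPC bdev_xnvme_delete xnvme_bdev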
00:12:55.542 [2024-12-05 02:57:26.343316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69400 ] 00:12:55.803 [2024-12-05 02:57:26.507760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.803 [2024-12-05 02:57:26.629331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.745 xnvme_bdev 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69400 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69400 ']' 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69400 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69400 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69400' 00:12:56.745 killing process with pid 69400 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69400 00:12:56.745 02:57:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69400 00:12:58.661 ************************************ 00:12:58.661 END TEST xnvme_rpc 00:12:58.661 ************************************ 00:12:58.661 00:12:58.661 real 0m2.941s 00:12:58.661 user 0m2.923s 00:12:58.661 sys 0m0.474s 00:12:58.661 02:57:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.661 02:57:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.661 02:57:29 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:58.661 02:57:29 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:58.661 02:57:29 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.661 02:57:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.661 ************************************ 00:12:58.661 START TEST xnvme_bdevperf 00:12:58.661 ************************************ 00:12:58.661 02:57:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:58.661 02:57:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:58.661 02:57:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:58.661 02:57:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:58.661 02:57:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:58.661 02:57:29 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:12:58.661 02:57:29 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:58.661 02:57:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:58.661 { 00:12:58.661 "subsystems": [ 00:12:58.661 { 00:12:58.661 "subsystem": "bdev", 00:12:58.661 "config": [ 00:12:58.661 { 00:12:58.661 "params": { 00:12:58.661 "io_mechanism": "libaio", 00:12:58.661 "conserve_cpu": true, 00:12:58.661 "filename": "/dev/nvme0n1", 00:12:58.661 "name": "xnvme_bdev" 00:12:58.661 }, 00:12:58.661 "method": "bdev_xnvme_create" 00:12:58.661 }, 00:12:58.661 { 00:12:58.661 "method": "bdev_wait_for_examine" 00:12:58.661 } 00:12:58.661 ] 00:12:58.661 } 00:12:58.661 ] 00:12:58.661 } 00:12:58.661 [2024-12-05 02:57:29.337505] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:12:58.662 [2024-12-05 02:57:29.337647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69474 ] 00:12:58.662 [2024-12-05 02:57:29.493231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.923 [2024-12-05 02:57:29.617291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.184 Running I/O for 5 seconds... 00:13:01.508 30098.00 IOPS, 117.57 MiB/s [2024-12-05T02:57:33.293Z] 28387.00 IOPS, 110.89 MiB/s [2024-12-05T02:57:34.235Z] 28361.00 IOPS, 110.79 MiB/s [2024-12-05T02:57:35.178Z] 28473.00 IOPS, 111.22 MiB/s 00:13:04.334 Latency(us) 00:13:04.334 [2024-12-05T02:57:35.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:04.334 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:04.334 xnvme_bdev : 5.00 28390.43 110.90 0.00 0.00 2249.18 513.58 10183.29 00:13:04.334 [2024-12-05T02:57:35.178Z] =================================================================================================================== 00:13:04.334 [2024-12-05T02:57:35.178Z] Total : 28390.43 110.90 0.00 0.00 2249.18 513.58 10183.29 00:13:05.276 02:57:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:05.276 02:57:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:05.276 02:57:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:05.276 02:57:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:05.276 02:57:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:05.276 { 00:13:05.276 "subsystems": [ 00:13:05.276 { 00:13:05.276 "subsystem": "bdev", 00:13:05.276 "config": [ 00:13:05.276 { 00:13:05.276 "params": { 00:13:05.276 "io_mechanism": "libaio", 00:13:05.276 "conserve_cpu": true, 00:13:05.276 "filename": "/dev/nvme0n1", 00:13:05.276 "name": "xnvme_bdev" 00:13:05.276 }, 00:13:05.276 "method": "bdev_xnvme_create" 00:13:05.276 }, 00:13:05.276 { 00:13:05.276 "method": "bdev_wait_for_examine" 00:13:05.276 } 00:13:05.276 ] 00:13:05.276 } 00:13:05.276 ] 00:13:05.276 } 00:13:05.276 [2024-12-05 02:57:35.827045] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:13:05.276 [2024-12-05 02:57:35.827205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69554 ] 00:13:05.276 [2024-12-05 02:57:35.991513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.276 [2024-12-05 02:57:36.115670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.850 Running I/O for 5 seconds... 00:13:07.755 33068.00 IOPS, 129.17 MiB/s [2024-12-05T02:57:39.542Z] 32509.50 IOPS, 126.99 MiB/s [2024-12-05T02:57:40.487Z] 32566.00 IOPS, 127.21 MiB/s [2024-12-05T02:57:41.433Z] 32581.75 IOPS, 127.27 MiB/s [2024-12-05T02:57:41.433Z] 32541.80 IOPS, 127.12 MiB/s 00:13:10.589 Latency(us) 00:13:10.589 [2024-12-05T02:57:41.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:10.589 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:10.589 xnvme_bdev : 5.01 32509.63 126.99 0.00 0.00 1963.47 466.31 8116.38 00:13:10.589 [2024-12-05T02:57:41.433Z] =================================================================================================================== 00:13:10.589 [2024-12-05T02:57:41.433Z] Total : 32509.63 126.99 0.00 0.00 1963.47 466.31 8116.38 00:13:11.535 00:13:11.535 real 0m12.976s 00:13:11.535 user 0m4.843s 00:13:11.535 sys 0m6.531s 00:13:11.535 02:57:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.535 ************************************ 00:13:11.535 02:57:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:11.535 END TEST xnvme_bdevperf 00:13:11.535 ************************************ 00:13:11.535 02:57:42 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:11.535 02:57:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:11.535 02:57:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:11.535 02:57:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:11.535 ************************************ 00:13:11.535 START TEST xnvme_fio_plugin 00:13:11.535 ************************************ 00:13:11.535 02:57:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:11.535 02:57:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:11.535 02:57:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:11.535 02:57:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:11.535 02:57:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:11.535 02:57:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:11.535 02:57:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:11.535 02:57:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:11.535 
02:57:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:11.535 02:57:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:11.535 02:57:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:11.535 02:57:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:11.535 02:57:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:11.535 02:57:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:11.535 02:57:42 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:11.535 02:57:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:11.535 02:57:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:11.535 02:57:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:11.535 02:57:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:11.535 02:57:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:11.535 02:57:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:11.535 02:57:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:11.535 02:57:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:11.535 02:57:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:11.535 { 00:13:11.535 "subsystems": [ 00:13:11.535 { 00:13:11.535 "subsystem": "bdev", 00:13:11.535 "config": [ 00:13:11.535 { 00:13:11.535 "params": { 00:13:11.535 "io_mechanism": "libaio", 00:13:11.535 "conserve_cpu": true, 00:13:11.535 "filename": "/dev/nvme0n1", 00:13:11.535 "name": "xnvme_bdev" 00:13:11.535 }, 00:13:11.535 "method": "bdev_xnvme_create" 00:13:11.535 }, 00:13:11.535 { 00:13:11.535 "method": "bdev_wait_for_examine" 00:13:11.535 } 00:13:11.535 ] 00:13:11.535 } 00:13:11.535 ] 00:13:11.535 } 00:13:11.797 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:11.797 fio-3.35 00:13:11.797 Starting 1 thread 00:13:18.478 00:13:18.478 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69673: Thu Dec 5 02:57:48 2024 00:13:18.478 read: IOPS=30.1k, BW=118MiB/s (123MB/s)(588MiB/5001msec) 00:13:18.478 slat (usec): min=4, max=1959, avg=23.80, stdev=110.72 00:13:18.478 clat (usec): min=106, max=4738, avg=1487.98, stdev=524.41 00:13:18.478 lat (usec): min=186, max=4862, avg=1511.78, stdev=510.83 00:13:18.478 clat percentiles (usec): 00:13:18.478 | 1.00th=[ 302], 5.00th=[ 644], 10.00th=[ 824], 20.00th=[ 1074], 00:13:18.478 | 30.00th=[ 1237], 40.00th=[ 1385], 50.00th=[ 1500], 60.00th=[ 1598], 00:13:18.478 | 70.00th=[ 1713], 80.00th=[ 1860], 90.00th=[ 2089], 95.00th=[ 2343], 00:13:18.478 | 99.00th=[ 3064], 99.50th=[ 3359], 99.90th=[ 3982], 99.95th=[ 4178], 00:13:18.478 | 99.99th=[ 4490] 00:13:18.478 bw ( KiB/s): min=117768, 
max=127696, per=100.00%, avg=120998.22, stdev=3305.87, samples=9 00:13:18.478 iops : min=29442, max=31924, avg=30249.56, stdev=826.47, samples=9 00:13:18.478 lat (usec) : 250=0.52%, 500=2.04%, 750=5.05%, 1000=9.01% 00:13:18.478 lat (msec) : 2=70.64%, 4=12.64%, 10=0.10% 00:13:18.478 cpu : usr=40.90%, sys=51.38%, ctx=10, majf=0, minf=764 00:13:18.478 IO depths : 1=0.5%, 2=1.2%, 4=3.0%, 8=8.0%, 16=22.5%, 32=62.6%, >=64=2.1% 00:13:18.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:18.478 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:13:18.478 issued rwts: total=150435,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:18.478 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:18.478 00:13:18.478 Run status group 0 (all jobs): 00:13:18.478 READ: bw=118MiB/s (123MB/s), 118MiB/s-118MiB/s (123MB/s-123MB/s), io=588MiB (616MB), run=5001-5001msec 00:13:18.773 ----------------------------------------------------- 00:13:18.773 Suppressions used: 00:13:18.773 count bytes template 00:13:18.773 1 11 /usr/src/fio/parse.c 00:13:18.773 1 8 libtcmalloc_minimal.so 00:13:18.773 1 904 libcrypto.so 00:13:18.773 ----------------------------------------------------- 00:13:18.773 00:13:18.773 02:57:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:18.773 02:57:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:18.773 02:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:18.773 02:57:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:18.773 02:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:18.773 02:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:18.773 02:57:49 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:18.773 02:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:18.773 02:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:18.774 02:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:18.774 02:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:18.774 02:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:18.774 02:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:18.774 02:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:18.774 02:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:18.774 02:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:18.774 02:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:18.774 02:57:49 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:18.774 02:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:18.774 02:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:18.774 02:57:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:18.774 { 00:13:18.774 "subsystems": [ 00:13:18.774 { 00:13:18.774 "subsystem": "bdev", 00:13:18.774 "config": [ 00:13:18.774 { 00:13:18.774 "params": { 00:13:18.774 "io_mechanism": "libaio", 00:13:18.774 "conserve_cpu": true, 00:13:18.774 "filename": "/dev/nvme0n1", 00:13:18.774 "name": "xnvme_bdev" 00:13:18.774 }, 00:13:18.774 "method": "bdev_xnvme_create" 00:13:18.774 }, 00:13:18.774 { 00:13:18.774 "method": "bdev_wait_for_examine" 00:13:18.774 } 00:13:18.774 ] 00:13:18.774 } 00:13:18.774 ] 00:13:18.774 } 00:13:18.774 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:18.774 fio-3.35 00:13:18.774 Starting 1 thread 00:13:25.377 00:13:25.377 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69766: Thu Dec 5 02:57:55 2024 00:13:25.377 write: IOPS=28.4k, BW=111MiB/s (116MB/s)(555MiB/5001msec); 0 zone resets 00:13:25.377 slat (usec): min=4, max=1754, avg=28.94, stdev=110.34 00:13:25.377 clat (usec): min=106, max=5280, avg=1470.24, stdev=594.85 00:13:25.377 lat (usec): min=189, max=5400, avg=1499.18, stdev=583.52 00:13:25.377 clat percentiles (usec): 00:13:25.377 | 1.00th=[ 277], 5.00th=[ 570], 10.00th=[ 742], 20.00th=[ 963], 00:13:25.377 | 30.00th=[ 1139], 40.00th=[ 1303], 50.00th=[ 1450], 60.00th=[ 1598], 00:13:25.377 | 70.00th=[ 1729], 80.00th=[ 1909], 90.00th=[ 2180], 95.00th=[ 2507], 00:13:25.377 | 99.00th=[ 3261], 99.50th=[ 3523], 99.90th=[ 4080], 99.95th=[ 4293], 00:13:25.377 | 99.99th=[ 4817] 00:13:25.377 bw ( KiB/s): min=107000, max=119080, per=100.00%, avg=113925.33, stdev=4531.40, samples=9 00:13:25.377 iops : min=26750, max=29770, avg=28481.33, stdev=1132.85, samples=9 00:13:25.377 lat (usec) : 250=0.70%, 500=2.99%, 750=6.73%, 1000=11.40% 00:13:25.377 lat (msec) : 2=62.27%, 4=15.77%, 10=0.13% 00:13:25.377 cpu : usr=29.88%, sys=60.40%, ctx=53, majf=0, minf=765 00:13:25.377 IO depths : 1=0.3%, 2=0.9%, 4=2.6%, 8=8.0%, 16=24.0%, 32=62.1%, >=64=2.0% 00:13:25.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.377 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:13:25.377 issued rwts: total=0,142187,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.377 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:25.377 00:13:25.377 Run status group 0 (all jobs): 00:13:25.377 WRITE: bw=111MiB/s (116MB/s), 111MiB/s-111MiB/s (116MB/s-116MB/s), io=555MiB (582MB), run=5001-5001msec 00:13:25.637 ----------------------------------------------------- 00:13:25.637 Suppressions used: 00:13:25.637 count bytes template 00:13:25.637 1 11 /usr/src/fio/parse.c 00:13:25.637 1 8 libtcmalloc_minimal.so 00:13:25.637 1 904 libcrypto.so 00:13:25.637 ----------------------------------------------------- 00:13:25.637 00:13:25.637 00:13:25.637 real 0m14.104s 00:13:25.637 user 0m6.529s 00:13:25.637 sys 0m6.290s 00:13:25.637 
************************************ 00:13:25.637 END TEST xnvme_fio_plugin 00:13:25.637 ************************************ 00:13:25.637 02:57:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:25.637 02:57:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:25.637 02:57:56 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:25.637 02:57:56 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:25.637 02:57:56 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:25.637 02:57:56 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:25.637 02:57:56 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:25.637 02:57:56 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:25.637 02:57:56 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:25.637 02:57:56 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:25.637 02:57:56 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:25.637 02:57:56 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:25.637 02:57:56 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:25.637 02:57:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:25.898 ************************************ 00:13:25.898 START TEST xnvme_rpc 00:13:25.898 ************************************ 00:13:25.898 02:57:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:25.898 02:57:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:25.898 02:57:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:25.898 02:57:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:25.898 02:57:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:25.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.898 02:57:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69852 00:13:25.898 02:57:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69852 00:13:25.898 02:57:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69852 ']' 00:13:25.898 02:57:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.898 02:57:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:25.898 02:57:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.898 02:57:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:25.898 02:57:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.898 02:57:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:25.898 [2024-12-05 02:57:56.588763] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
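From here the outer loop switches the io_mechanism from libaio to io_uring. Per the xnvme_filename mapping established at the top of this section, io_uring keeps using the block device /dev/nvme0n1, while io_uring_cmd (mapped there to /dev/ng0n1 for a later pass) would target the NVMe character device instead. The create call for this pass, in sketch form with the same assumed rpc.py path:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring      # same namespace, io_uring io_mechanism
# $RPC bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd  # the io_uring_cmd mapping set up above would use /dev/ng0n1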
00:13:25.898 [2024-12-05 02:57:56.588932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69852 ] 00:13:26.159 [2024-12-05 02:57:56.757466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.159 [2024-12-05 02:57:56.901146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.101 02:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:27.101 02:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:27.101 02:57:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:27.101 02:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.101 02:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.101 xnvme_bdev 00:13:27.101 02:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.101 02:57:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:27.101 02:57:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:27.101 02:57:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:27.101 02:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69852 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69852 ']' 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69852 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69852 00:13:27.102 killing process with pid 69852 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69852' 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69852 00:13:27.102 02:57:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69852 00:13:29.021 00:13:29.021 real 0m3.179s 00:13:29.021 user 0m3.041s 00:13:29.021 sys 0m0.602s 00:13:29.021 ************************************ 00:13:29.022 END TEST xnvme_rpc 00:13:29.022 ************************************ 00:13:29.022 02:57:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:29.022 02:57:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.022 02:57:59 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:29.022 02:57:59 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:29.022 02:57:59 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:29.022 02:57:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:29.022 ************************************ 00:13:29.022 START TEST xnvme_bdevperf 00:13:29.022 ************************************ 00:13:29.022 02:57:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:29.022 02:57:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:29.022 02:57:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:29.022 02:57:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:29.022 02:57:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:29.022 02:57:59 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:29.022 02:57:59 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:29.022 02:57:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:29.022 { 00:13:29.022 "subsystems": [ 00:13:29.022 { 00:13:29.022 "subsystem": "bdev", 00:13:29.022 "config": [ 00:13:29.022 { 00:13:29.022 "params": { 00:13:29.022 "io_mechanism": "io_uring", 00:13:29.022 "conserve_cpu": false, 00:13:29.022 "filename": "/dev/nvme0n1", 00:13:29.022 "name": "xnvme_bdev" 00:13:29.022 }, 00:13:29.022 "method": "bdev_xnvme_create" 00:13:29.022 }, 00:13:29.022 { 00:13:29.022 "method": "bdev_wait_for_examine" 00:13:29.022 } 00:13:29.022 ] 00:13:29.022 } 00:13:29.022 ] 00:13:29.022 } 00:13:29.022 [2024-12-05 02:57:59.800313] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:13:29.022 [2024-12-05 02:57:59.800455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69927 ] 00:13:29.284 [2024-12-05 02:57:59.956745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.284 [2024-12-05 02:58:00.091785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.543 Running I/O for 5 seconds... 00:13:31.871 33872.00 IOPS, 132.31 MiB/s [2024-12-05T02:58:03.661Z] 33286.50 IOPS, 130.03 MiB/s [2024-12-05T02:58:04.605Z] 33128.67 IOPS, 129.41 MiB/s [2024-12-05T02:58:05.549Z] 33235.25 IOPS, 129.83 MiB/s 00:13:34.705 Latency(us) 00:13:34.705 [2024-12-05T02:58:05.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:34.705 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:34.705 xnvme_bdev : 5.00 33154.42 129.51 0.00 0.00 1927.02 333.98 12149.37 00:13:34.705 [2024-12-05T02:58:05.549Z] =================================================================================================================== 00:13:34.705 [2024-12-05T02:58:05.549Z] Total : 33154.42 129.51 0.00 0.00 1927.02 333.98 12149.37 00:13:35.645 02:58:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:35.645 02:58:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:35.645 02:58:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:35.645 02:58:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:35.645 02:58:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:35.645 { 00:13:35.645 "subsystems": [ 00:13:35.645 { 00:13:35.645 "subsystem": "bdev", 00:13:35.645 "config": [ 00:13:35.645 { 00:13:35.645 "params": { 00:13:35.645 "io_mechanism": "io_uring", 00:13:35.645 "conserve_cpu": false, 00:13:35.645 "filename": "/dev/nvme0n1", 00:13:35.645 "name": "xnvme_bdev" 00:13:35.645 }, 00:13:35.645 "method": "bdev_xnvme_create" 00:13:35.645 }, 00:13:35.645 { 00:13:35.645 "method": "bdev_wait_for_examine" 00:13:35.645 } 00:13:35.645 ] 00:13:35.645 } 00:13:35.645 ] 00:13:35.645 } 00:13:35.645 [2024-12-05 02:58:06.252030] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
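The bdevperf passes above are driven by the small JSON bdev configuration that gen_conf prints on /dev/fd/62 together with the bdevperf command line recorded in the log. A rough standalone reproduction of the randread pass, assuming a local SPDK build under $SPDK_DIR and an otherwise idle /dev/nvme0n1 (the config path below is illustrative, not the pipe the test script uses):

#!/usr/bin/env bash
# Recreate the io_uring randread bdevperf pass from a plain config file
# instead of the /dev/fd/62 pipe that the test harness uses.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-$HOME/spdk}    # assumption: local SPDK build directory
CONF=/tmp/xnvme_bdev.json           # illustrative path for the generated config

cat > "$CONF" <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "io_uring",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF

# Same knobs as the logged run: queue depth 64, 4 KiB random reads, 5 seconds,
# limited to the xnvme_bdev target (-T).
"$SPDK_DIR/build/examples/bdevperf" --json "$CONF" -q 64 -w randread -t 5 -T xnvme_bdev -o 4096

The randwrite pass that starts below changes only -w randread to -w randwrite.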
00:13:35.645 [2024-12-05 02:58:06.252239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69998 ] 00:13:35.645 [2024-12-05 02:58:06.419700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.906 [2024-12-05 02:58:06.536294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.166 Running I/O for 5 seconds... 00:13:38.051 35094.00 IOPS, 137.09 MiB/s [2024-12-05T02:58:09.838Z] 34247.00 IOPS, 133.78 MiB/s [2024-12-05T02:58:11.221Z] 33992.33 IOPS, 132.78 MiB/s [2024-12-05T02:58:12.166Z] 33853.25 IOPS, 132.24 MiB/s 00:13:41.322 Latency(us) 00:13:41.322 [2024-12-05T02:58:12.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.322 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:41.322 xnvme_bdev : 5.00 33926.54 132.53 0.00 0.00 1882.35 348.16 13208.02 00:13:41.322 [2024-12-05T02:58:12.167Z] =================================================================================================================== 00:13:41.323 [2024-12-05T02:58:12.167Z] Total : 33926.54 132.53 0.00 0.00 1882.35 348.16 13208.02 00:13:41.896 00:13:41.896 real 0m12.900s 00:13:41.896 user 0m6.276s 00:13:41.896 sys 0m6.364s 00:13:41.896 02:58:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:41.896 02:58:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:41.896 ************************************ 00:13:41.896 END TEST xnvme_bdevperf 00:13:41.896 ************************************ 00:13:41.896 02:58:12 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:41.896 02:58:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:41.896 02:58:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:41.896 02:58:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:41.896 ************************************ 00:13:41.896 START TEST xnvme_fio_plugin 00:13:41.896 ************************************ 00:13:41.896 02:58:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:41.896 02:58:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:41.896 02:58:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:41.896 02:58:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:41.896 02:58:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:41.896 02:58:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:41.896 02:58:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:41.896 02:58:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:41.896 02:58:12 
nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:41.896 02:58:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:41.896 02:58:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:41.896 02:58:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:41.896 02:58:12 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:41.896 02:58:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:41.896 02:58:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:41.896 02:58:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:41.896 02:58:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:41.896 02:58:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:41.896 02:58:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:41.896 02:58:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:41.896 02:58:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:41.896 02:58:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:41.896 02:58:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:41.896 02:58:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:42.157 { 00:13:42.157 "subsystems": [ 00:13:42.157 { 00:13:42.157 "subsystem": "bdev", 00:13:42.157 "config": [ 00:13:42.157 { 00:13:42.157 "params": { 00:13:42.157 "io_mechanism": "io_uring", 00:13:42.157 "conserve_cpu": false, 00:13:42.157 "filename": "/dev/nvme0n1", 00:13:42.157 "name": "xnvme_bdev" 00:13:42.157 }, 00:13:42.157 "method": "bdev_xnvme_create" 00:13:42.157 }, 00:13:42.157 { 00:13:42.157 "method": "bdev_wait_for_examine" 00:13:42.157 } 00:13:42.157 ] 00:13:42.157 } 00:13:42.157 ] 00:13:42.157 } 00:13:42.157 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:42.157 fio-3.35 00:13:42.157 Starting 1 thread 00:13:48.746 00:13:48.746 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70116: Thu Dec 5 02:58:18 2024 00:13:48.746 read: IOPS=33.9k, BW=132MiB/s (139MB/s)(661MiB/5001msec) 00:13:48.746 slat (usec): min=2, max=282, avg= 3.47, stdev= 2.16 00:13:48.746 clat (usec): min=1029, max=4102, avg=1748.33, stdev=317.98 00:13:48.746 lat (usec): min=1032, max=4134, avg=1751.80, stdev=318.37 00:13:48.746 clat percentiles (usec): 00:13:48.746 | 1.00th=[ 1205], 5.00th=[ 1303], 10.00th=[ 1369], 20.00th=[ 1483], 00:13:48.746 | 30.00th=[ 1565], 40.00th=[ 1647], 50.00th=[ 1713], 60.00th=[ 1795], 00:13:48.746 | 70.00th=[ 1876], 80.00th=[ 1991], 90.00th=[ 2147], 95.00th=[ 2278], 00:13:48.746 | 99.00th=[ 2704], 99.50th=[ 2933], 99.90th=[ 3458], 99.95th=[ 3621], 00:13:48.746 | 99.99th=[ 3949] 00:13:48.746 bw ( KiB/s): min=124672, max=154368, per=98.88%, avg=133916.00, 
stdev=9910.47, samples=9 00:13:48.746 iops : min=31168, max=38592, avg=33479.00, stdev=2477.62, samples=9 00:13:48.746 lat (msec) : 2=80.92%, 4=19.07%, 10=0.01% 00:13:48.746 cpu : usr=31.94%, sys=66.20%, ctx=27, majf=0, minf=762 00:13:48.746 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=24.9%, 32=50.1%, >=64=1.6% 00:13:48.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.746 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:48.746 issued rwts: total=169332,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.746 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:48.746 00:13:48.746 Run status group 0 (all jobs): 00:13:48.746 READ: bw=132MiB/s (139MB/s), 132MiB/s-132MiB/s (139MB/s-139MB/s), io=661MiB (694MB), run=5001-5001msec 00:13:49.006 ----------------------------------------------------- 00:13:49.006 Suppressions used: 00:13:49.006 count bytes template 00:13:49.006 1 11 /usr/src/fio/parse.c 00:13:49.006 1 8 libtcmalloc_minimal.so 00:13:49.006 1 904 libcrypto.so 00:13:49.006 ----------------------------------------------------- 00:13:49.006 00:13:49.006 02:58:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:49.006 02:58:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:49.006 02:58:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:49.006 02:58:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:49.006 02:58:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:49.006 02:58:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:49.006 02:58:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:49.006 02:58:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:49.006 02:58:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:49.006 02:58:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:49.006 02:58:19 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:49.006 02:58:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:49.006 02:58:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:49.006 02:58:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:49.006 02:58:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:49.006 02:58:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:49.006 02:58:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:49.006 02:58:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:49.006 
02:58:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:49.006 02:58:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:49.006 02:58:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:49.006 { 00:13:49.006 "subsystems": [ 00:13:49.006 { 00:13:49.006 "subsystem": "bdev", 00:13:49.006 "config": [ 00:13:49.006 { 00:13:49.006 "params": { 00:13:49.006 "io_mechanism": "io_uring", 00:13:49.006 "conserve_cpu": false, 00:13:49.006 "filename": "/dev/nvme0n1", 00:13:49.006 "name": "xnvme_bdev" 00:13:49.006 }, 00:13:49.006 "method": "bdev_xnvme_create" 00:13:49.006 }, 00:13:49.006 { 00:13:49.006 "method": "bdev_wait_for_examine" 00:13:49.006 } 00:13:49.006 ] 00:13:49.006 } 00:13:49.006 ] 00:13:49.006 } 00:13:49.266 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:49.266 fio-3.35 00:13:49.266 Starting 1 thread 00:13:55.854 00:13:55.854 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70208: Thu Dec 5 02:58:25 2024 00:13:55.854 write: IOPS=34.6k, BW=135MiB/s (142MB/s)(676MiB/5001msec); 0 zone resets 00:13:55.854 slat (nsec): min=2899, max=88730, avg=3787.13, stdev=1753.04 00:13:55.854 clat (usec): min=415, max=4751, avg=1698.00, stdev=255.36 00:13:55.854 lat (usec): min=419, max=4754, avg=1701.79, stdev=255.59 00:13:55.854 clat percentiles (usec): 00:13:55.854 | 1.00th=[ 1188], 5.00th=[ 1319], 10.00th=[ 1401], 20.00th=[ 1500], 00:13:55.854 | 30.00th=[ 1565], 40.00th=[ 1614], 50.00th=[ 1680], 60.00th=[ 1745], 00:13:55.854 | 70.00th=[ 1795], 80.00th=[ 1893], 90.00th=[ 2008], 95.00th=[ 2147], 00:13:55.854 | 99.00th=[ 2409], 99.50th=[ 2507], 99.90th=[ 3032], 99.95th=[ 3425], 00:13:55.854 | 99.99th=[ 4228] 00:13:55.854 bw ( KiB/s): min=131584, max=159720, per=100.00%, avg=138437.33, stdev=8435.93, samples=9 00:13:55.854 iops : min=32896, max=39930, avg=34609.33, stdev=2108.98, samples=9 00:13:55.854 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.02% 00:13:55.854 lat (msec) : 2=89.23%, 4=10.69%, 10=0.04% 00:13:55.854 cpu : usr=32.58%, sys=66.28%, ctx=9, majf=0, minf=763 00:13:55.854 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=25.0%, 32=50.1%, >=64=1.6% 00:13:55.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.854 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:55.854 issued rwts: total=0,173027,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.854 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:55.854 00:13:55.854 Run status group 0 (all jobs): 00:13:55.854 WRITE: bw=135MiB/s (142MB/s), 135MiB/s-135MiB/s (142MB/s-142MB/s), io=676MiB (709MB), run=5001-5001msec 00:13:55.854 ----------------------------------------------------- 00:13:55.854 Suppressions used: 00:13:55.854 count bytes template 00:13:55.854 1 11 /usr/src/fio/parse.c 00:13:55.854 1 8 libtcmalloc_minimal.so 00:13:55.854 1 904 libcrypto.so 00:13:55.854 ----------------------------------------------------- 00:13:55.854 00:13:55.854 00:13:55.854 real 0m13.903s 00:13:55.854 user 0m6.184s 00:13:55.854 sys 0m7.234s 00:13:55.854 02:58:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:55.854 
************************************ 00:13:55.854 02:58:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:55.854 END TEST xnvme_fio_plugin 00:13:55.854 ************************************ 00:13:55.854 02:58:26 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:55.854 02:58:26 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:55.854 02:58:26 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:55.854 02:58:26 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:55.854 02:58:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:55.854 02:58:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:55.854 02:58:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:55.854 ************************************ 00:13:55.854 START TEST xnvme_rpc 00:13:55.854 ************************************ 00:13:55.854 02:58:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:55.854 02:58:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:55.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.854 02:58:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:55.854 02:58:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:55.854 02:58:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:55.854 02:58:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70294 00:13:55.854 02:58:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70294 00:13:55.854 02:58:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70294 ']' 00:13:55.854 02:58:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.854 02:58:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:55.854 02:58:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.854 02:58:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:55.854 02:58:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:55.855 02:58:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.115 [2024-12-05 02:58:26.777426] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
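The xnvme_rpc pass starting here repeats the earlier RPC checks with CPU conservation enabled: the bdev is created with the -c flag and the test then reads conserve_cpu back out of the bdev subsystem config. rpc_cmd in these scripts forwards to SPDK's scripts/rpc.py, so against a running spdk_tgt the exchange can be sketched roughly as follows (default /var/tmp/spdk.sock socket assumed, jq filter copied from the log):

# assumption: SPDK_DIR points at a local SPDK checkout and spdk_tgt is already running
SPDK_DIR=${SPDK_DIR:-$HOME/spdk}

# Create the xnvme bdev on top of /dev/nvme0n1 with the io_uring mechanism,
# asking xnvme to conserve CPU (-c).
"$SPDK_DIR/scripts/rpc.py" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c

# framework_get_config returns the bdev subsystem config; jq pulls out the
# parameter the test asserts on. Expected output here: true
"$SPDK_DIR/scripts/rpc.py" framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'

# Tear the bdev down again, as the test does before killing spdk_tgt.
"$SPDK_DIR/scripts/rpc.py" bdev_xnvme_delete xnvme_bdev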
00:13:56.115 [2024-12-05 02:58:26.777591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70294 ] 00:13:56.115 [2024-12-05 02:58:26.947507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.377 [2024-12-05 02:58:27.066595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.950 02:58:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:56.950 02:58:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:56.950 02:58:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:56.950 02:58:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.950 02:58:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.950 xnvme_bdev 00:13:56.950 02:58:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.950 02:58:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:56.950 02:58:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:56.950 02:58:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:56.950 02:58:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.950 02:58:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.950 02:58:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.211 02:58:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:57.211 02:58:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:57.211 02:58:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:57.211 02:58:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.211 02:58:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.211 02:58:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:57.211 02:58:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.211 02:58:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:57.211 02:58:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:57.211 02:58:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:57.211 02:58:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:57.211 02:58:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.211 02:58:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.211 02:58:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.211 02:58:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:57.211 02:58:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:57.211 02:58:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:57.211 02:58:27 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.211 02:58:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.211 02:58:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:57.211 02:58:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.211 02:58:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:57.212 02:58:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:57.212 02:58:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.212 02:58:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.212 02:58:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.212 02:58:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70294 00:13:57.212 02:58:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70294 ']' 00:13:57.212 02:58:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70294 00:13:57.212 02:58:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:57.212 02:58:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:57.212 02:58:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70294 00:13:57.212 killing process with pid 70294 00:13:57.212 02:58:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:57.212 02:58:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:57.212 02:58:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70294' 00:13:57.212 02:58:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70294 00:13:57.212 02:58:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70294 00:13:59.128 00:13:59.128 real 0m2.906s 00:13:59.128 user 0m2.877s 00:13:59.128 sys 0m0.503s 00:13:59.128 02:58:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:59.128 ************************************ 00:13:59.128 END TEST xnvme_rpc 00:13:59.128 ************************************ 00:13:59.128 02:58:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.128 02:58:29 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:59.128 02:58:29 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:59.128 02:58:29 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:59.128 02:58:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:59.128 ************************************ 00:13:59.128 START TEST xnvme_bdevperf 00:13:59.128 ************************************ 00:13:59.128 02:58:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:59.128 02:58:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:59.128 02:58:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:59.128 02:58:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:59.128 02:58:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:59.128 02:58:29 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:59.128 02:58:29 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:59.128 02:58:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:59.128 { 00:13:59.128 "subsystems": [ 00:13:59.128 { 00:13:59.128 "subsystem": "bdev", 00:13:59.128 "config": [ 00:13:59.128 { 00:13:59.128 "params": { 00:13:59.128 "io_mechanism": "io_uring", 00:13:59.128 "conserve_cpu": true, 00:13:59.128 "filename": "/dev/nvme0n1", 00:13:59.128 "name": "xnvme_bdev" 00:13:59.128 }, 00:13:59.128 "method": "bdev_xnvme_create" 00:13:59.128 }, 00:13:59.128 { 00:13:59.128 "method": "bdev_wait_for_examine" 00:13:59.128 } 00:13:59.128 ] 00:13:59.128 } 00:13:59.128 ] 00:13:59.128 } 00:13:59.128 [2024-12-05 02:58:29.731394] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:13:59.128 [2024-12-05 02:58:29.731533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70368 ] 00:13:59.128 [2024-12-05 02:58:29.896811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.434 [2024-12-05 02:58:30.019463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.736 Running I/O for 5 seconds... 00:14:01.629 34341.00 IOPS, 134.14 MiB/s [2024-12-05T02:58:33.416Z] 34854.50 IOPS, 136.15 MiB/s [2024-12-05T02:58:34.359Z] 35363.67 IOPS, 138.14 MiB/s [2024-12-05T02:58:35.751Z] 35415.25 IOPS, 138.34 MiB/s [2024-12-05T02:58:35.751Z] 35398.40 IOPS, 138.28 MiB/s 00:14:04.907 Latency(us) 00:14:04.907 [2024-12-05T02:58:35.751Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.907 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:04.907 xnvme_bdev : 5.00 35383.44 138.22 0.00 0.00 1805.02 1008.25 11695.66 00:14:04.907 [2024-12-05T02:58:35.751Z] =================================================================================================================== 00:14:04.907 [2024-12-05T02:58:35.751Z] Total : 35383.44 138.22 0.00 0.00 1805.02 1008.25 11695.66 00:14:05.478 02:58:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:05.478 02:58:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:05.478 02:58:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:05.478 02:58:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:05.478 02:58:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:05.478 { 00:14:05.478 "subsystems": [ 00:14:05.478 { 00:14:05.478 "subsystem": "bdev", 00:14:05.478 "config": [ 00:14:05.478 { 00:14:05.478 "params": { 00:14:05.478 "io_mechanism": "io_uring", 00:14:05.478 "conserve_cpu": true, 00:14:05.478 "filename": "/dev/nvme0n1", 00:14:05.478 "name": "xnvme_bdev" 00:14:05.478 }, 00:14:05.478 "method": "bdev_xnvme_create" 00:14:05.478 }, 00:14:05.478 { 00:14:05.478 "method": "bdev_wait_for_examine" 00:14:05.478 } 00:14:05.478 ] 00:14:05.478 } 00:14:05.478 ] 00:14:05.478 } 00:14:05.478 [2024-12-05 02:58:36.205551] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:14:05.478 [2024-12-05 02:58:36.205957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70443 ] 00:14:05.739 [2024-12-05 02:58:36.371820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.739 [2024-12-05 02:58:36.492368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.000 Running I/O for 5 seconds... 00:14:08.332 37662.00 IOPS, 147.12 MiB/s [2024-12-05T02:58:40.119Z] 36829.00 IOPS, 143.86 MiB/s [2024-12-05T02:58:41.061Z] 36748.67 IOPS, 143.55 MiB/s [2024-12-05T02:58:42.002Z] 36771.50 IOPS, 143.64 MiB/s 00:14:11.158 Latency(us) 00:14:11.158 [2024-12-05T02:58:42.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.158 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:11.158 xnvme_bdev : 5.00 36723.71 143.45 0.00 0.00 1738.86 957.83 6125.10 00:14:11.158 [2024-12-05T02:58:42.002Z] =================================================================================================================== 00:14:11.158 [2024-12-05T02:58:42.002Z] Total : 36723.71 143.45 0.00 0.00 1738.86 957.83 6125.10 00:14:11.728 ************************************ 00:14:11.728 END TEST xnvme_bdevperf 00:14:11.728 ************************************ 00:14:11.728 00:14:11.728 real 0m12.902s 00:14:11.728 user 0m8.537s 00:14:11.728 sys 0m3.823s 00:14:11.728 02:58:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:11.728 02:58:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:11.990 02:58:42 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:11.990 02:58:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:11.990 02:58:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:11.990 02:58:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:11.990 ************************************ 00:14:11.990 START TEST xnvme_fio_plugin 00:14:11.990 ************************************ 00:14:11.990 02:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:11.990 02:58:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:11.990 02:58:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:11.990 02:58:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:11.990 02:58:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:11.990 02:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:11.990 02:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:11.990 02:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:11.990 02:58:42 
nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:11.990 02:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:11.990 02:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:11.990 02:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:11.990 02:58:42 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:11.990 02:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:11.990 02:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:11.990 02:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:11.990 02:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:11.990 02:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:11.990 02:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:11.990 02:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:11.990 02:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:11.990 02:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:11.990 02:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:11.990 02:58:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:11.990 { 00:14:11.990 "subsystems": [ 00:14:11.990 { 00:14:11.990 "subsystem": "bdev", 00:14:11.990 "config": [ 00:14:11.990 { 00:14:11.990 "params": { 00:14:11.990 "io_mechanism": "io_uring", 00:14:11.990 "conserve_cpu": true, 00:14:11.990 "filename": "/dev/nvme0n1", 00:14:11.990 "name": "xnvme_bdev" 00:14:11.990 }, 00:14:11.990 "method": "bdev_xnvme_create" 00:14:11.990 }, 00:14:11.990 { 00:14:11.990 "method": "bdev_wait_for_examine" 00:14:11.990 } 00:14:11.990 ] 00:14:11.990 } 00:14:11.990 ] 00:14:11.990 } 00:14:11.990 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:11.990 fio-3.35 00:14:11.990 Starting 1 thread 00:14:18.579 00:14:18.579 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70557: Thu Dec 5 02:58:48 2024 00:14:18.579 read: IOPS=35.0k, BW=137MiB/s (143MB/s)(684MiB/5001msec) 00:14:18.579 slat (nsec): min=2860, max=57396, avg=3387.21, stdev=1497.44 00:14:18.579 clat (usec): min=1078, max=4752, avg=1691.69, stdev=227.89 00:14:18.579 lat (usec): min=1081, max=4755, avg=1695.07, stdev=228.08 00:14:18.579 clat percentiles (usec): 00:14:18.579 | 1.00th=[ 1287], 5.00th=[ 1385], 10.00th=[ 1434], 20.00th=[ 1500], 00:14:18.579 | 30.00th=[ 1565], 40.00th=[ 1614], 50.00th=[ 1663], 60.00th=[ 1713], 00:14:18.579 | 70.00th=[ 1778], 80.00th=[ 1860], 90.00th=[ 1975], 95.00th=[ 2114], 00:14:18.579 | 99.00th=[ 2376], 99.50th=[ 2474], 99.90th=[ 2835], 99.95th=[ 2999], 00:14:18.579 | 99.99th=[ 3589] 00:14:18.579 bw ( KiB/s): min=137216, max=145408, per=100.00%, avg=140088.00, 
stdev=2400.44, samples=9 00:14:18.579 iops : min=34304, max=36352, avg=35022.00, stdev=600.11, samples=9 00:14:18.579 lat (msec) : 2=90.98%, 4=9.02%, 10=0.01% 00:14:18.579 cpu : usr=60.48%, sys=36.20%, ctx=13, majf=0, minf=762 00:14:18.579 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:18.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:18.579 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:18.579 issued rwts: total=175064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:18.579 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:18.579 00:14:18.579 Run status group 0 (all jobs): 00:14:18.579 READ: bw=137MiB/s (143MB/s), 137MiB/s-137MiB/s (143MB/s-143MB/s), io=684MiB (717MB), run=5001-5001msec 00:14:18.840 ----------------------------------------------------- 00:14:18.840 Suppressions used: 00:14:18.840 count bytes template 00:14:18.840 1 11 /usr/src/fio/parse.c 00:14:18.840 1 8 libtcmalloc_minimal.so 00:14:18.840 1 904 libcrypto.so 00:14:18.840 ----------------------------------------------------- 00:14:18.840 00:14:18.840 02:58:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:18.840 02:58:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:18.840 02:58:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:18.840 02:58:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:18.840 02:58:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:18.840 02:58:49 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:18.840 02:58:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:18.840 02:58:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:18.840 02:58:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:18.840 02:58:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:18.840 02:58:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:18.840 02:58:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:18.840 02:58:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:18.840 02:58:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:18.840 02:58:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:18.840 02:58:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:18.840 02:58:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:18.840 02:58:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:18.840 
02:58:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:18.840 02:58:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:18.840 02:58:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:18.840 { 00:14:18.840 "subsystems": [ 00:14:18.840 { 00:14:18.840 "subsystem": "bdev", 00:14:18.840 "config": [ 00:14:18.840 { 00:14:18.840 "params": { 00:14:18.840 "io_mechanism": "io_uring", 00:14:18.840 "conserve_cpu": true, 00:14:18.840 "filename": "/dev/nvme0n1", 00:14:18.840 "name": "xnvme_bdev" 00:14:18.840 }, 00:14:18.840 "method": "bdev_xnvme_create" 00:14:18.840 }, 00:14:18.840 { 00:14:18.840 "method": "bdev_wait_for_examine" 00:14:18.840 } 00:14:18.840 ] 00:14:18.840 } 00:14:18.840 ] 00:14:18.840 } 00:14:19.100 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:19.100 fio-3.35 00:14:19.100 Starting 1 thread 00:14:25.680 00:14:25.680 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70654: Thu Dec 5 02:58:55 2024 00:14:25.680 write: IOPS=37.1k, BW=145MiB/s (152MB/s)(725MiB/5001msec); 0 zone resets 00:14:25.680 slat (usec): min=2, max=269, avg= 3.69, stdev= 1.63 00:14:25.680 clat (usec): min=484, max=5115, avg=1580.53, stdev=250.17 00:14:25.680 lat (usec): min=488, max=5119, avg=1584.22, stdev=250.39 00:14:25.680 clat percentiles (usec): 00:14:25.680 | 1.00th=[ 1139], 5.00th=[ 1221], 10.00th=[ 1287], 20.00th=[ 1369], 00:14:25.680 | 30.00th=[ 1434], 40.00th=[ 1500], 50.00th=[ 1565], 60.00th=[ 1614], 00:14:25.680 | 70.00th=[ 1680], 80.00th=[ 1762], 90.00th=[ 1893], 95.00th=[ 2008], 00:14:25.680 | 99.00th=[ 2343], 99.50th=[ 2507], 99.90th=[ 2933], 99.95th=[ 3163], 00:14:25.680 | 99.99th=[ 3490] 00:14:25.680 bw ( KiB/s): min=141040, max=158136, per=100.00%, avg=148414.22, stdev=4952.32, samples=9 00:14:25.680 iops : min=35260, max=39534, avg=37103.56, stdev=1238.08, samples=9 00:14:25.680 lat (usec) : 500=0.01%, 1000=0.02% 00:14:25.680 lat (msec) : 2=94.70%, 4=5.28%, 10=0.01% 00:14:25.680 cpu : usr=70.78%, sys=25.90%, ctx=17, majf=0, minf=763 00:14:25.680 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:14:25.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:25.680 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:25.680 issued rwts: total=0,185536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:25.680 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:25.680 00:14:25.680 Run status group 0 (all jobs): 00:14:25.680 WRITE: bw=145MiB/s (152MB/s), 145MiB/s-145MiB/s (152MB/s-152MB/s), io=725MiB (760MB), run=5001-5001msec 00:14:25.941 ----------------------------------------------------- 00:14:25.941 Suppressions used: 00:14:25.941 count bytes template 00:14:25.941 1 11 /usr/src/fio/parse.c 00:14:25.941 1 8 libtcmalloc_minimal.so 00:14:25.941 1 904 libcrypto.so 00:14:25.941 ----------------------------------------------------- 00:14:25.941 00:14:25.941 00:14:25.941 real 0m13.954s 00:14:25.941 user 0m9.527s 00:14:25.941 sys 0m3.770s 00:14:25.941 02:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:25.941 02:58:56 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@10 -- # set +x 00:14:25.941 ************************************ 00:14:25.941 END TEST xnvme_fio_plugin 00:14:25.941 ************************************ 00:14:25.941 02:58:56 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:25.941 02:58:56 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:14:25.941 02:58:56 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:14:25.941 02:58:56 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:14:25.941 02:58:56 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:25.941 02:58:56 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:25.941 02:58:56 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:25.941 02:58:56 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:25.941 02:58:56 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:25.941 02:58:56 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:25.941 02:58:56 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:25.941 02:58:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:25.941 ************************************ 00:14:25.941 START TEST xnvme_rpc 00:14:25.941 ************************************ 00:14:25.941 02:58:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:25.941 02:58:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:25.941 02:58:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:25.941 02:58:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:25.941 02:58:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:25.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.942 02:58:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70735 00:14:25.942 02:58:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70735 00:14:25.942 02:58:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70735 ']' 00:14:25.942 02:58:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.942 02:58:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:25.942 02:58:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.942 02:58:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:25.942 02:58:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.942 02:58:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:25.942 [2024-12-05 02:58:56.751371] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
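The fio_plugin passes above exercise the same bdev through fio's external ioengine support: SPDK's spdk_bdev plugin is preloaded (together with libasan, since this is an ASan build), the bdev config is again supplied as JSON, and --filename names the bdev rather than a device path. A sketch of the randread job under those assumptions, reusing a config file like the one in the bdevperf sketch above:

# assumption: local SPDK build and a JSON bdev config at $CONF (see the earlier sketch);
# on non-ASan builds the libasan entry in LD_PRELOAD is unnecessary.
SPDK_DIR=${SPDK_DIR:-$HOME/spdk}
CONF=${CONF:-/tmp/xnvme_bdev.json}

LD_PRELOAD="$SPDK_DIR/build/fio/spdk_bdev" fio \
  --ioengine=spdk_bdev --spdk_json_conf="$CONF" --filename=xnvme_bdev \
  --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
  --time_based --runtime=5 --thread=1 --name=xnvme_bdev

From this point the log repeats the whole rpc/bdevperf/fio cycle with io_mechanism io_uring_cmd, which submits NVMe commands to the character device /dev/ng0n1 instead of the block device /dev/nvme0n1.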
00:14:25.942 [2024-12-05 02:58:56.751528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70735 ] 00:14:26.205 [2024-12-05 02:58:56.919046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.466 [2024-12-05 02:58:57.059599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.040 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:27.040 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:27.040 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:14:27.040 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.040 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.040 xnvme_bdev 00:14:27.040 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.040 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:27.040 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:27.040 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.040 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:27.040 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.303 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.303 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:27.303 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:27.303 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:27.303 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:27.303 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.303 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.303 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.303 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:27.303 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:27.303 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:27.303 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.303 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.303 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:27.303 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.303 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:27.303 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:27.303 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:27.303 02:58:57 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:27.303 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.303 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.303 02:58:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.303 02:58:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:27.303 02:58:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:27.303 02:58:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.303 02:58:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.303 02:58:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.303 02:58:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70735 00:14:27.303 02:58:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70735 ']' 00:14:27.303 02:58:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70735 00:14:27.303 02:58:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:27.303 02:58:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:27.303 02:58:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70735 00:14:27.303 killing process with pid 70735 00:14:27.303 02:58:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:27.303 02:58:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:27.303 02:58:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70735' 00:14:27.303 02:58:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70735 00:14:27.303 02:58:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70735 00:14:29.221 00:14:29.221 real 0m3.188s 00:14:29.221 user 0m3.082s 00:14:29.221 sys 0m0.602s 00:14:29.221 ************************************ 00:14:29.221 END TEST xnvme_rpc 00:14:29.221 ************************************ 00:14:29.221 02:58:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:29.221 02:58:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.221 02:58:59 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:29.221 02:58:59 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:29.221 02:58:59 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:29.221 02:58:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:29.221 ************************************ 00:14:29.221 START TEST xnvme_bdevperf 00:14:29.221 ************************************ 00:14:29.221 02:58:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:29.221 02:58:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:29.221 02:58:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:29.222 02:58:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:29.222 02:58:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:29.222 02:58:59 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:29.222 02:58:59 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:29.222 02:58:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:29.222 { 00:14:29.222 "subsystems": [ 00:14:29.222 { 00:14:29.222 "subsystem": "bdev", 00:14:29.222 "config": [ 00:14:29.222 { 00:14:29.222 "params": { 00:14:29.222 "io_mechanism": "io_uring_cmd", 00:14:29.222 "conserve_cpu": false, 00:14:29.222 "filename": "/dev/ng0n1", 00:14:29.222 "name": "xnvme_bdev" 00:14:29.222 }, 00:14:29.222 "method": "bdev_xnvme_create" 00:14:29.222 }, 00:14:29.222 { 00:14:29.222 "method": "bdev_wait_for_examine" 00:14:29.222 } 00:14:29.222 ] 00:14:29.222 } 00:14:29.222 ] 00:14:29.222 } 00:14:29.222 [2024-12-05 02:58:59.999667] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:14:29.222 [2024-12-05 02:58:59.999810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70809 ] 00:14:29.484 [2024-12-05 02:59:00.164480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.484 [2024-12-05 02:59:00.307872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.057 Running I/O for 5 seconds... 00:14:31.945 36480.00 IOPS, 142.50 MiB/s [2024-12-05T02:59:03.734Z] 39104.00 IOPS, 152.75 MiB/s [2024-12-05T02:59:04.679Z] 39552.00 IOPS, 154.50 MiB/s [2024-12-05T02:59:06.059Z] 39808.00 IOPS, 155.50 MiB/s [2024-12-05T02:59:06.059Z] 39746.60 IOPS, 155.26 MiB/s 00:14:35.215 Latency(us) 00:14:35.215 [2024-12-05T02:59:06.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.215 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:35.215 xnvme_bdev : 5.00 39742.06 155.24 0.00 0.00 1607.08 326.10 12250.19 00:14:35.215 [2024-12-05T02:59:06.060Z] =================================================================================================================== 00:14:35.216 [2024-12-05T02:59:06.060Z] Total : 39742.06 155.24 0.00 0.00 1607.08 326.10 12250.19 00:14:35.789 02:59:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:35.789 02:59:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:35.789 02:59:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:35.789 02:59:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:35.789 02:59:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:35.789 { 00:14:35.789 "subsystems": [ 00:14:35.789 { 00:14:35.789 "subsystem": "bdev", 00:14:35.789 "config": [ 00:14:35.789 { 00:14:35.789 "params": { 00:14:35.789 "io_mechanism": "io_uring_cmd", 00:14:35.789 "conserve_cpu": false, 00:14:35.789 "filename": "/dev/ng0n1", 00:14:35.789 "name": "xnvme_bdev" 00:14:35.789 }, 00:14:35.789 "method": "bdev_xnvme_create" 00:14:35.789 }, 00:14:35.789 { 00:14:35.789 "method": "bdev_wait_for_examine" 00:14:35.789 } 00:14:35.789 ] 00:14:35.789 } 00:14:35.789 ] 00:14:35.789 } 00:14:35.789 [2024-12-05 02:59:06.597470] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:14:35.789 [2024-12-05 02:59:06.597617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70889 ] 00:14:36.051 [2024-12-05 02:59:06.763764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.312 [2024-12-05 02:59:06.902805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.573 Running I/O for 5 seconds... 00:14:38.539 34615.00 IOPS, 135.21 MiB/s [2024-12-05T02:59:10.325Z] 34524.00 IOPS, 134.86 MiB/s [2024-12-05T02:59:11.269Z] 35042.67 IOPS, 136.89 MiB/s [2024-12-05T02:59:12.655Z] 35235.75 IOPS, 137.64 MiB/s [2024-12-05T02:59:12.655Z] 35254.20 IOPS, 137.71 MiB/s 00:14:41.811 Latency(us) 00:14:41.811 [2024-12-05T02:59:12.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.811 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:41.811 xnvme_bdev : 5.00 35243.96 137.67 0.00 0.00 1811.70 382.82 10586.58 00:14:41.811 [2024-12-05T02:59:12.655Z] =================================================================================================================== 00:14:41.811 [2024-12-05T02:59:12.655Z] Total : 35243.96 137.67 0.00 0.00 1811.70 382.82 10586.58 00:14:42.384 02:59:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:42.384 02:59:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:42.384 02:59:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:42.385 02:59:13 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:42.385 02:59:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:42.385 { 00:14:42.385 "subsystems": [ 00:14:42.385 { 00:14:42.385 "subsystem": "bdev", 00:14:42.385 "config": [ 00:14:42.385 { 00:14:42.385 "params": { 00:14:42.385 "io_mechanism": "io_uring_cmd", 00:14:42.385 "conserve_cpu": false, 00:14:42.385 "filename": "/dev/ng0n1", 00:14:42.385 "name": "xnvme_bdev" 00:14:42.385 }, 00:14:42.385 "method": "bdev_xnvme_create" 00:14:42.385 }, 00:14:42.385 { 00:14:42.385 "method": "bdev_wait_for_examine" 00:14:42.385 } 00:14:42.385 ] 00:14:42.385 } 00:14:42.385 ] 00:14:42.385 } 00:14:42.385 [2024-12-05 02:59:13.118617] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:14:42.385 [2024-12-05 02:59:13.118761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70963 ] 00:14:42.645 [2024-12-05 02:59:13.281918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.646 [2024-12-05 02:59:13.406677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.906 Running I/O for 5 seconds... 
00:14:44.862 76992.00 IOPS, 300.75 MiB/s [2024-12-05T02:59:17.093Z] 77504.00 IOPS, 302.75 MiB/s [2024-12-05T02:59:18.039Z] 78634.67 IOPS, 307.17 MiB/s [2024-12-05T02:59:18.984Z] 78720.00 IOPS, 307.50 MiB/s [2024-12-05T02:59:18.984Z] 80179.20 IOPS, 313.20 MiB/s 00:14:48.140 Latency(us) 00:14:48.140 [2024-12-05T02:59:18.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.140 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:48.140 xnvme_bdev : 5.00 80153.05 313.10 0.00 0.00 795.05 529.33 2495.41 00:14:48.140 [2024-12-05T02:59:18.984Z] =================================================================================================================== 00:14:48.140 [2024-12-05T02:59:18.984Z] Total : 80153.05 313.10 0.00 0.00 795.05 529.33 2495.41 00:14:48.713 02:59:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:48.713 02:59:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:48.713 02:59:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:48.713 02:59:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:48.713 02:59:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:48.713 { 00:14:48.713 "subsystems": [ 00:14:48.713 { 00:14:48.713 "subsystem": "bdev", 00:14:48.713 "config": [ 00:14:48.713 { 00:14:48.713 "params": { 00:14:48.713 "io_mechanism": "io_uring_cmd", 00:14:48.713 "conserve_cpu": false, 00:14:48.713 "filename": "/dev/ng0n1", 00:14:48.713 "name": "xnvme_bdev" 00:14:48.713 }, 00:14:48.713 "method": "bdev_xnvme_create" 00:14:48.713 }, 00:14:48.713 { 00:14:48.713 "method": "bdev_wait_for_examine" 00:14:48.713 } 00:14:48.713 ] 00:14:48.713 } 00:14:48.713 ] 00:14:48.713 } 00:14:48.713 [2024-12-05 02:59:19.316682] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:14:48.713 [2024-12-05 02:59:19.316798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71037 ] 00:14:48.713 [2024-12-05 02:59:19.475160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.974 [2024-12-05 02:59:19.561189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.974 Running I/O for 5 seconds... 
00:14:51.297 51785.00 IOPS, 202.29 MiB/s [2024-12-05T02:59:23.084Z] 47980.50 IOPS, 187.42 MiB/s [2024-12-05T02:59:24.027Z] 44399.67 IOPS, 173.44 MiB/s [2024-12-05T02:59:24.963Z] 43380.50 IOPS, 169.46 MiB/s [2024-12-05T02:59:24.963Z] 42087.80 IOPS, 164.41 MiB/s 00:14:54.119 Latency(us) 00:14:54.119 [2024-12-05T02:59:24.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.119 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:54.119 xnvme_bdev : 5.00 42075.82 164.36 0.00 0.00 1516.99 207.95 19559.98 00:14:54.119 [2024-12-05T02:59:24.963Z] =================================================================================================================== 00:14:54.119 [2024-12-05T02:59:24.963Z] Total : 42075.82 164.36 0.00 0.00 1516.99 207.95 19559.98 00:14:55.059 00:14:55.059 real 0m25.644s 00:14:55.059 user 0m13.742s 00:14:55.059 sys 0m11.413s 00:14:55.059 ************************************ 00:14:55.059 END TEST xnvme_bdevperf 00:14:55.059 ************************************ 00:14:55.059 02:59:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:55.059 02:59:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:55.059 02:59:25 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:55.059 02:59:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:55.059 02:59:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:55.059 02:59:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:55.059 ************************************ 00:14:55.059 START TEST xnvme_fio_plugin 00:14:55.059 ************************************ 00:14:55.059 02:59:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:55.059 02:59:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:55.059 02:59:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:55.059 02:59:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:55.059 02:59:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:55.059 02:59:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:55.059 02:59:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:55.059 02:59:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:55.059 02:59:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:55.059 02:59:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:55.059 02:59:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:55.059 02:59:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:55.059 02:59:25 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 
00:14:55.059 02:59:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:55.059 02:59:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:55.059 02:59:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:55.059 02:59:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:55.059 02:59:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:55.059 02:59:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:55.059 02:59:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:55.059 02:59:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:55.059 02:59:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:55.059 02:59:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:55.059 02:59:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:55.059 { 00:14:55.059 "subsystems": [ 00:14:55.059 { 00:14:55.059 "subsystem": "bdev", 00:14:55.059 "config": [ 00:14:55.059 { 00:14:55.059 "params": { 00:14:55.060 "io_mechanism": "io_uring_cmd", 00:14:55.060 "conserve_cpu": false, 00:14:55.060 "filename": "/dev/ng0n1", 00:14:55.060 "name": "xnvme_bdev" 00:14:55.060 }, 00:14:55.060 "method": "bdev_xnvme_create" 00:14:55.060 }, 00:14:55.060 { 00:14:55.060 "method": "bdev_wait_for_examine" 00:14:55.060 } 00:14:55.060 ] 00:14:55.060 } 00:14:55.060 ] 00:14:55.060 } 00:14:55.060 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:55.060 fio-3.35 00:14:55.060 Starting 1 thread 00:15:01.651 00:15:01.651 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71151: Thu Dec 5 02:59:31 2024 00:15:01.651 read: IOPS=37.4k, BW=146MiB/s (153MB/s)(731MiB/5002msec) 00:15:01.651 slat (usec): min=2, max=288, avg= 3.98, stdev= 2.17 00:15:01.651 clat (usec): min=994, max=6236, avg=1548.13, stdev=208.32 00:15:01.651 lat (usec): min=997, max=6245, avg=1552.10, stdev=208.72 00:15:01.651 clat percentiles (usec): 00:15:01.651 | 1.00th=[ 1188], 5.00th=[ 1287], 10.00th=[ 1319], 20.00th=[ 1385], 00:15:01.651 | 30.00th=[ 1434], 40.00th=[ 1467], 50.00th=[ 1516], 60.00th=[ 1565], 00:15:01.651 | 70.00th=[ 1614], 80.00th=[ 1696], 90.00th=[ 1811], 95.00th=[ 1942], 00:15:01.651 | 99.00th=[ 2212], 99.50th=[ 2311], 99.90th=[ 2507], 99.95th=[ 2638], 00:15:01.651 | 99.99th=[ 3425] 00:15:01.651 bw ( KiB/s): min=143856, max=154624, per=99.91%, avg=149467.89, stdev=3117.73, samples=9 00:15:01.651 iops : min=35964, max=38656, avg=37366.89, stdev=779.41, samples=9 00:15:01.651 lat (usec) : 1000=0.01% 00:15:01.651 lat (msec) : 2=96.33%, 4=3.66%, 10=0.01% 00:15:01.651 cpu : usr=32.93%, sys=65.63%, ctx=14, majf=0, minf=762 00:15:01.651 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:01.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.651 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:15:01.651 issued rwts: total=187069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:01.651 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:01.651 00:15:01.651 Run status group 0 (all jobs): 00:15:01.651 READ: bw=146MiB/s (153MB/s), 146MiB/s-146MiB/s (153MB/s-153MB/s), io=731MiB (766MB), run=5002-5002msec 00:15:01.912 ----------------------------------------------------- 00:15:01.912 Suppressions used: 00:15:01.912 count bytes template 00:15:01.912 1 11 /usr/src/fio/parse.c 00:15:01.912 1 8 libtcmalloc_minimal.so 00:15:01.912 1 904 libcrypto.so 00:15:01.912 ----------------------------------------------------- 00:15:01.912 00:15:01.912 02:59:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:01.912 02:59:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:01.913 02:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:01.913 02:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:01.913 02:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:01.913 02:59:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:01.913 02:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:01.913 02:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:01.913 02:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:01.913 02:59:32 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:01.913 02:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:01.913 02:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:01.913 02:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:01.913 02:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:01.913 02:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:01.913 02:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:01.913 02:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:01.913 02:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:01.913 02:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:01.913 02:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:01.913 02:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:01.913 { 00:15:01.913 "subsystems": [ 00:15:01.913 { 00:15:01.913 "subsystem": "bdev", 00:15:01.913 "config": [ 00:15:01.913 { 00:15:01.913 "params": { 00:15:01.913 "io_mechanism": "io_uring_cmd", 00:15:01.913 "conserve_cpu": false, 00:15:01.913 "filename": "/dev/ng0n1", 00:15:01.913 "name": "xnvme_bdev" 00:15:01.913 }, 00:15:01.913 "method": "bdev_xnvme_create" 00:15:01.913 }, 00:15:01.913 { 00:15:01.913 "method": "bdev_wait_for_examine" 00:15:01.913 } 00:15:01.913 ] 00:15:01.913 } 00:15:01.913 ] 00:15:01.913 } 00:15:01.913 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:01.913 fio-3.35 00:15:01.913 Starting 1 thread 00:15:08.507 00:15:08.507 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71248: Thu Dec 5 02:59:38 2024 00:15:08.507 write: IOPS=38.2k, BW=149MiB/s (156MB/s)(745MiB/5001msec); 0 zone resets 00:15:08.507 slat (usec): min=2, max=203, avg= 4.23, stdev= 2.19 00:15:08.507 clat (usec): min=188, max=6289, avg=1511.88, stdev=273.61 00:15:08.507 lat (usec): min=192, max=6293, avg=1516.12, stdev=273.95 00:15:08.507 clat percentiles (usec): 00:15:08.507 | 1.00th=[ 873], 5.00th=[ 1172], 10.00th=[ 1254], 20.00th=[ 1319], 00:15:08.507 | 30.00th=[ 1385], 40.00th=[ 1434], 50.00th=[ 1483], 60.00th=[ 1532], 00:15:08.507 | 70.00th=[ 1598], 80.00th=[ 1680], 90.00th=[ 1811], 95.00th=[ 1958], 00:15:08.507 | 99.00th=[ 2343], 99.50th=[ 2606], 99.90th=[ 3621], 99.95th=[ 3916], 00:15:08.507 | 99.99th=[ 4686] 00:15:08.507 bw ( KiB/s): min=147760, max=163936, per=100.00%, avg=153050.67, stdev=4773.05, samples=9 00:15:08.507 iops : min=36940, max=40984, avg=38262.67, stdev=1193.26, samples=9 00:15:08.507 lat (usec) : 250=0.01%, 500=0.09%, 750=0.41%, 1000=1.30% 00:15:08.507 lat (msec) : 2=94.23%, 4=3.92%, 10=0.04% 00:15:08.507 cpu : usr=34.14%, sys=64.42%, ctx=27, majf=0, minf=763 00:15:08.507 IO depths : 1=1.4%, 2=2.9%, 4=5.8%, 8=11.8%, 16=24.0%, 32=52.4%, >=64=1.7% 00:15:08.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.507 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:08.507 issued rwts: total=0,190812,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.507 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:08.507 00:15:08.507 Run status group 0 (all jobs): 00:15:08.507 WRITE: bw=149MiB/s (156MB/s), 149MiB/s-149MiB/s (156MB/s-156MB/s), io=745MiB (782MB), run=5001-5001msec 00:15:08.767 ----------------------------------------------------- 00:15:08.767 Suppressions used: 00:15:08.767 count bytes template 00:15:08.767 1 11 /usr/src/fio/parse.c 00:15:08.767 1 8 libtcmalloc_minimal.so 00:15:08.767 1 904 libcrypto.so 00:15:08.767 ----------------------------------------------------- 00:15:08.767 00:15:08.767 ************************************ 00:15:08.767 END TEST xnvme_fio_plugin 00:15:08.767 ************************************ 00:15:08.767 00:15:08.767 real 0m13.797s 00:15:08.767 user 0m6.240s 00:15:08.767 sys 0m7.085s 00:15:08.767 02:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:08.767 02:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:08.767 02:59:39 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:08.767 02:59:39 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:15:08.767 02:59:39 nvme_xnvme -- xnvme/xnvme.sh@84 -- # 
conserve_cpu=true 00:15:08.767 02:59:39 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:08.767 02:59:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:08.767 02:59:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:08.767 02:59:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:08.767 ************************************ 00:15:08.767 START TEST xnvme_rpc 00:15:08.767 ************************************ 00:15:08.767 02:59:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:08.767 02:59:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:08.767 02:59:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:08.767 02:59:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:08.767 02:59:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:08.767 02:59:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71331 00:15:08.767 02:59:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71331 00:15:08.767 02:59:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71331 ']' 00:15:08.768 02:59:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.768 02:59:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:08.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.768 02:59:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.768 02:59:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:08.768 02:59:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.768 02:59:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:08.768 [2024-12-05 02:59:39.592019] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:15:08.768 [2024-12-05 02:59:39.592436] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71331 ] 00:15:09.028 [2024-12-05 02:59:39.755982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.289 [2024-12-05 02:59:39.872006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.862 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:09.862 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:09.862 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:15:09.862 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.862 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.862 xnvme_bdev 00:15:09.862 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.862 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:09.862 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:09.862 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.862 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.862 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:09.862 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.862 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:09.862 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:09.862 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:09.862 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.862 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.862 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:09.862 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.862 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:15:09.862 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:09.862 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:09.862 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:09.862 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.862 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.862 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.862 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:15:09.862 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:09.862 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:09.862 02:59:40 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.862 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.862 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:10.124 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.124 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:10.124 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:10.124 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.124 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.124 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.124 02:59:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71331 00:15:10.124 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71331 ']' 00:15:10.124 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71331 00:15:10.124 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:10.124 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:10.124 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71331 00:15:10.124 killing process with pid 71331 00:15:10.124 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:10.124 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:10.124 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71331' 00:15:10.124 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71331 00:15:10.124 02:59:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71331 00:15:12.041 ************************************ 00:15:12.041 00:15:12.041 real 0m2.921s 00:15:12.041 user 0m2.925s 00:15:12.041 sys 0m0.496s 00:15:12.041 02:59:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:12.041 02:59:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.041 END TEST xnvme_rpc 00:15:12.041 ************************************ 00:15:12.041 02:59:42 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:12.042 02:59:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:12.042 02:59:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:12.042 02:59:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:12.042 ************************************ 00:15:12.042 START TEST xnvme_bdevperf 00:15:12.042 ************************************ 00:15:12.042 02:59:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:12.042 02:59:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:12.042 02:59:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:15:12.042 02:59:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:12.042 02:59:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:12.042 02:59:42 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:12.042 02:59:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:12.042 02:59:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:12.042 { 00:15:12.042 "subsystems": [ 00:15:12.042 { 00:15:12.042 "subsystem": "bdev", 00:15:12.042 "config": [ 00:15:12.042 { 00:15:12.042 "params": { 00:15:12.042 "io_mechanism": "io_uring_cmd", 00:15:12.042 "conserve_cpu": true, 00:15:12.042 "filename": "/dev/ng0n1", 00:15:12.042 "name": "xnvme_bdev" 00:15:12.042 }, 00:15:12.042 "method": "bdev_xnvme_create" 00:15:12.042 }, 00:15:12.042 { 00:15:12.042 "method": "bdev_wait_for_examine" 00:15:12.042 } 00:15:12.042 ] 00:15:12.042 } 00:15:12.042 ] 00:15:12.042 } 00:15:12.042 [2024-12-05 02:59:42.563026] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:15:12.042 [2024-12-05 02:59:42.563354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71405 ] 00:15:12.042 [2024-12-05 02:59:42.719111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.042 [2024-12-05 02:59:42.834387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.304 Running I/O for 5 seconds... 00:15:14.634 39933.00 IOPS, 155.99 MiB/s [2024-12-05T02:59:46.478Z] 39343.50 IOPS, 153.69 MiB/s [2024-12-05T02:59:47.449Z] 38602.33 IOPS, 150.79 MiB/s [2024-12-05T02:59:48.396Z] 38583.75 IOPS, 150.72 MiB/s [2024-12-05T02:59:48.396Z] 38393.40 IOPS, 149.97 MiB/s 00:15:17.552 Latency(us) 00:15:17.552 [2024-12-05T02:59:48.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.552 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:17.552 xnvme_bdev : 5.01 38359.65 149.84 0.00 0.00 1664.30 825.50 4461.49 00:15:17.552 [2024-12-05T02:59:48.396Z] =================================================================================================================== 00:15:17.552 [2024-12-05T02:59:48.396Z] Total : 38359.65 149.84 0.00 0.00 1664.30 825.50 4461.49 00:15:18.125 02:59:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:18.125 02:59:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:18.125 02:59:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:18.125 02:59:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:18.125 02:59:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:18.125 { 00:15:18.125 "subsystems": [ 00:15:18.125 { 00:15:18.125 "subsystem": "bdev", 00:15:18.125 "config": [ 00:15:18.125 { 00:15:18.125 "params": { 00:15:18.125 "io_mechanism": "io_uring_cmd", 00:15:18.125 "conserve_cpu": true, 00:15:18.125 "filename": "/dev/ng0n1", 00:15:18.125 "name": "xnvme_bdev" 00:15:18.125 }, 00:15:18.125 "method": "bdev_xnvme_create" 00:15:18.125 }, 00:15:18.125 { 00:15:18.125 "method": "bdev_wait_for_examine" 00:15:18.125 } 00:15:18.125 ] 00:15:18.125 } 00:15:18.125 ] 00:15:18.125 } 00:15:18.386 [2024-12-05 02:59:48.995524] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:15:18.386 [2024-12-05 02:59:48.995671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71478 ] 00:15:18.386 [2024-12-05 02:59:49.162158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.648 [2024-12-05 02:59:49.283765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.909 Running I/O for 5 seconds... 00:15:20.795 39397.00 IOPS, 153.89 MiB/s [2024-12-05T02:59:52.583Z] 39412.50 IOPS, 153.96 MiB/s [2024-12-05T02:59:53.969Z] 39371.00 IOPS, 153.79 MiB/s [2024-12-05T02:59:54.909Z] 39376.25 IOPS, 153.81 MiB/s 00:15:24.065 Latency(us) 00:15:24.065 [2024-12-05T02:59:54.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.065 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:24.065 xnvme_bdev : 5.00 39508.69 154.33 0.00 0.00 1615.10 567.14 6099.89 00:15:24.065 [2024-12-05T02:59:54.909Z] =================================================================================================================== 00:15:24.065 [2024-12-05T02:59:54.909Z] Total : 39508.69 154.33 0.00 0.00 1615.10 567.14 6099.89 00:15:24.636 02:59:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:24.636 02:59:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:24.636 02:59:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:24.636 02:59:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:24.636 02:59:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:24.636 { 00:15:24.636 "subsystems": [ 00:15:24.636 { 00:15:24.636 "subsystem": "bdev", 00:15:24.636 "config": [ 00:15:24.636 { 00:15:24.636 "params": { 00:15:24.636 "io_mechanism": "io_uring_cmd", 00:15:24.636 "conserve_cpu": true, 00:15:24.636 "filename": "/dev/ng0n1", 00:15:24.636 "name": "xnvme_bdev" 00:15:24.636 }, 00:15:24.636 "method": "bdev_xnvme_create" 00:15:24.636 }, 00:15:24.636 { 00:15:24.636 "method": "bdev_wait_for_examine" 00:15:24.636 } 00:15:24.636 ] 00:15:24.636 } 00:15:24.636 ] 00:15:24.636 } 00:15:24.636 [2024-12-05 02:59:55.453691] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:15:24.636 [2024-12-05 02:59:55.454366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71548 ] 00:15:24.895 [2024-12-05 02:59:55.616592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.153 [2024-12-05 02:59:55.739260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.413 Running I/O for 5 seconds... 
00:15:27.298 79296.00 IOPS, 309.75 MiB/s [2024-12-05T02:59:59.086Z] 77920.00 IOPS, 304.38 MiB/s [2024-12-05T03:00:00.032Z] 77525.33 IOPS, 302.83 MiB/s [2024-12-05T03:00:01.420Z] 77632.00 IOPS, 303.25 MiB/s 00:15:30.576 Latency(us) 00:15:30.576 [2024-12-05T03:00:01.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.576 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:30.576 xnvme_bdev : 5.00 79132.78 309.11 0.00 0.00 805.32 354.46 2734.87 00:15:30.576 [2024-12-05T03:00:01.420Z] =================================================================================================================== 00:15:30.576 [2024-12-05T03:00:01.420Z] Total : 79132.78 309.11 0.00 0.00 805.32 354.46 2734.87 00:15:31.147 03:00:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:31.147 03:00:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:31.147 03:00:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:31.147 03:00:01 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:31.147 03:00:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:31.147 { 00:15:31.147 "subsystems": [ 00:15:31.147 { 00:15:31.147 "subsystem": "bdev", 00:15:31.147 "config": [ 00:15:31.147 { 00:15:31.147 "params": { 00:15:31.147 "io_mechanism": "io_uring_cmd", 00:15:31.147 "conserve_cpu": true, 00:15:31.147 "filename": "/dev/ng0n1", 00:15:31.147 "name": "xnvme_bdev" 00:15:31.147 }, 00:15:31.147 "method": "bdev_xnvme_create" 00:15:31.147 }, 00:15:31.147 { 00:15:31.147 "method": "bdev_wait_for_examine" 00:15:31.147 } 00:15:31.147 ] 00:15:31.147 } 00:15:31.147 ] 00:15:31.147 } 00:15:31.147 [2024-12-05 03:00:01.910597] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:15:31.147 [2024-12-05 03:00:01.910739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71628 ] 00:15:31.408 [2024-12-05 03:00:02.075220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.408 [2024-12-05 03:00:02.194312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.668 Running I/O for 5 seconds... 
00:15:33.997 41040.00 IOPS, 160.31 MiB/s [2024-12-05T03:00:05.784Z] 39476.50 IOPS, 154.21 MiB/s [2024-12-05T03:00:06.722Z] 38981.33 IOPS, 152.27 MiB/s [2024-12-05T03:00:07.663Z] 38890.25 IOPS, 151.92 MiB/s [2024-12-05T03:00:07.663Z] 38854.60 IOPS, 151.78 MiB/s 00:15:36.819 Latency(us) 00:15:36.819 [2024-12-05T03:00:07.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:36.819 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:36.819 xnvme_bdev : 5.01 38786.90 151.51 0.00 0.00 1644.03 61.83 22685.54 00:15:36.819 [2024-12-05T03:00:07.663Z] =================================================================================================================== 00:15:36.819 [2024-12-05T03:00:07.663Z] Total : 38786.90 151.51 0.00 0.00 1644.03 61.83 22685.54 00:15:37.762 00:15:37.762 real 0m25.804s 00:15:37.762 user 0m15.815s 00:15:37.762 sys 0m7.719s 00:15:37.762 03:00:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:37.762 03:00:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:37.762 ************************************ 00:15:37.762 END TEST xnvme_bdevperf 00:15:37.762 ************************************ 00:15:37.762 03:00:08 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:37.762 03:00:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:37.762 03:00:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:37.762 03:00:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:37.762 ************************************ 00:15:37.762 START TEST xnvme_fio_plugin 00:15:37.762 ************************************ 00:15:37.762 03:00:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:37.762 03:00:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:37.762 03:00:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:37.762 03:00:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:37.762 03:00:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:37.762 03:00:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:37.762 03:00:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:37.762 03:00:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:37.762 03:00:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:37.762 03:00:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:37.762 03:00:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:37.762 03:00:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:37.762 03:00:08 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:37.762 03:00:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:37.762 03:00:08 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:37.762 03:00:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:37.762 03:00:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:37.762 03:00:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:37.762 03:00:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:37.762 03:00:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:37.762 03:00:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:37.762 03:00:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:37.762 03:00:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:37.762 03:00:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:37.762 { 00:15:37.762 "subsystems": [ 00:15:37.762 { 00:15:37.762 "subsystem": "bdev", 00:15:37.762 "config": [ 00:15:37.762 { 00:15:37.762 "params": { 00:15:37.762 "io_mechanism": "io_uring_cmd", 00:15:37.762 "conserve_cpu": true, 00:15:37.762 "filename": "/dev/ng0n1", 00:15:37.762 "name": "xnvme_bdev" 00:15:37.762 }, 00:15:37.762 "method": "bdev_xnvme_create" 00:15:37.762 }, 00:15:37.762 { 00:15:37.762 "method": "bdev_wait_for_examine" 00:15:37.762 } 00:15:37.762 ] 00:15:37.762 } 00:15:37.762 ] 00:15:37.762 } 00:15:37.762 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:37.762 fio-3.35 00:15:37.762 Starting 1 thread 00:15:44.354 00:15:44.354 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71746: Thu Dec 5 03:00:14 2024 00:15:44.354 read: IOPS=38.6k, BW=151MiB/s (158MB/s)(754MiB/5002msec) 00:15:44.354 slat (usec): min=2, max=109, avg= 3.70, stdev= 1.88 00:15:44.354 clat (usec): min=819, max=4671, avg=1509.21, stdev=247.73 00:15:44.354 lat (usec): min=822, max=4674, avg=1512.91, stdev=248.18 00:15:44.354 clat percentiles (usec): 00:15:44.354 | 1.00th=[ 1020], 5.00th=[ 1123], 10.00th=[ 1205], 20.00th=[ 1319], 00:15:44.354 | 30.00th=[ 1385], 40.00th=[ 1434], 50.00th=[ 1500], 60.00th=[ 1549], 00:15:44.354 | 70.00th=[ 1614], 80.00th=[ 1680], 90.00th=[ 1811], 95.00th=[ 1942], 00:15:44.354 | 99.00th=[ 2212], 99.50th=[ 2311], 99.90th=[ 2638], 99.95th=[ 3195], 00:15:44.354 | 99.99th=[ 3720] 00:15:44.354 bw ( KiB/s): min=143360, max=180224, per=100.00%, avg=155845.33, stdev=12297.13, samples=9 00:15:44.354 iops : min=35840, max=45056, avg=38961.33, stdev=3074.28, samples=9 00:15:44.354 lat (usec) : 1000=0.63% 00:15:44.354 lat (msec) : 2=95.71%, 4=3.66%, 10=0.01% 00:15:44.354 cpu : usr=50.79%, sys=45.89%, ctx=19, majf=0, minf=762 00:15:44.354 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:44.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:44.354 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 
32=0.1%, 64=1.5%, >=64=0.0% 00:15:44.354 issued rwts: total=192926,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:44.354 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:44.354 00:15:44.354 Run status group 0 (all jobs): 00:15:44.354 READ: bw=151MiB/s (158MB/s), 151MiB/s-151MiB/s (158MB/s-158MB/s), io=754MiB (790MB), run=5002-5002msec 00:15:44.615 ----------------------------------------------------- 00:15:44.615 Suppressions used: 00:15:44.615 count bytes template 00:15:44.615 1 11 /usr/src/fio/parse.c 00:15:44.615 1 8 libtcmalloc_minimal.so 00:15:44.615 1 904 libcrypto.so 00:15:44.615 ----------------------------------------------------- 00:15:44.615 00:15:44.615 03:00:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:44.615 03:00:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:44.615 03:00:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:44.615 03:00:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:44.615 03:00:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:44.615 03:00:15 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:44.615 03:00:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:44.615 03:00:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:44.615 03:00:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:44.615 03:00:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:44.615 03:00:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:44.615 03:00:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:44.615 03:00:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:44.615 03:00:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:44.615 03:00:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:44.615 03:00:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:44.615 03:00:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:44.615 03:00:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:44.615 03:00:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:44.615 03:00:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:44.615 03:00:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k 
--iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:44.615 { 00:15:44.615 "subsystems": [ 00:15:44.615 { 00:15:44.615 "subsystem": "bdev", 00:15:44.615 "config": [ 00:15:44.615 { 00:15:44.615 "params": { 00:15:44.615 "io_mechanism": "io_uring_cmd", 00:15:44.615 "conserve_cpu": true, 00:15:44.615 "filename": "/dev/ng0n1", 00:15:44.615 "name": "xnvme_bdev" 00:15:44.615 }, 00:15:44.615 "method": "bdev_xnvme_create" 00:15:44.615 }, 00:15:44.615 { 00:15:44.615 "method": "bdev_wait_for_examine" 00:15:44.615 } 00:15:44.615 ] 00:15:44.615 } 00:15:44.615 ] 00:15:44.615 } 00:15:44.874 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:44.874 fio-3.35 00:15:44.874 Starting 1 thread 00:15:51.461 00:15:51.461 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71837: Thu Dec 5 03:00:21 2024 00:15:51.461 write: IOPS=38.9k, BW=152MiB/s (160MB/s)(764MiB/5019msec); 0 zone resets 00:15:51.461 slat (usec): min=2, max=438, avg= 4.31, stdev= 2.71 00:15:51.461 clat (usec): min=117, max=40219, avg=1473.25, stdev=690.73 00:15:51.461 lat (usec): min=121, max=40223, avg=1477.56, stdev=690.89 00:15:51.461 clat percentiles (usec): 00:15:51.461 | 1.00th=[ 1074], 5.00th=[ 1172], 10.00th=[ 1221], 20.00th=[ 1287], 00:15:51.461 | 30.00th=[ 1336], 40.00th=[ 1385], 50.00th=[ 1418], 60.00th=[ 1467], 00:15:51.461 | 70.00th=[ 1516], 80.00th=[ 1598], 90.00th=[ 1713], 95.00th=[ 1844], 00:15:51.461 | 99.00th=[ 2180], 99.50th=[ 2376], 99.90th=[11863], 99.95th=[20055], 00:15:51.461 | 99.99th=[33817] 00:15:51.461 bw ( KiB/s): min=136664, max=162600, per=100.00%, avg=156314.40, stdev=7493.92, samples=10 00:15:51.461 iops : min=34166, max=40650, avg=39078.60, stdev=1873.48, samples=10 00:15:51.461 lat (usec) : 250=0.01%, 500=0.03%, 750=0.07%, 1000=0.29% 00:15:51.461 lat (msec) : 2=97.36%, 4=2.12%, 10=0.02%, 20=0.06%, 50=0.05% 00:15:51.461 cpu : usr=43.84%, sys=51.39%, ctx=20, majf=0, minf=763 00:15:51.461 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.4%, 16=24.9%, 32=50.4%, >=64=1.7% 00:15:51.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:51.461 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:51.461 issued rwts: total=0,195456,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:51.461 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:51.461 00:15:51.461 Run status group 0 (all jobs): 00:15:51.461 WRITE: bw=152MiB/s (160MB/s), 152MiB/s-152MiB/s (160MB/s-160MB/s), io=764MiB (801MB), run=5019-5019msec 00:15:51.461 ----------------------------------------------------- 00:15:51.461 Suppressions used: 00:15:51.461 count bytes template 00:15:51.461 1 11 /usr/src/fio/parse.c 00:15:51.461 1 8 libtcmalloc_minimal.so 00:15:51.461 1 904 libcrypto.so 00:15:51.461 ----------------------------------------------------- 00:15:51.461 00:15:51.461 00:15:51.461 real 0m13.836s 00:15:51.461 user 0m7.611s 00:15:51.461 sys 0m5.500s 00:15:51.461 ************************************ 00:15:51.461 END TEST xnvme_fio_plugin 00:15:51.461 ************************************ 00:15:51.461 03:00:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:51.461 03:00:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:51.461 Process with pid 71331 is not found 00:15:51.461 03:00:22 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71331 00:15:51.461 03:00:22 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71331 ']' 00:15:51.461 
03:00:22 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 71331 00:15:51.461 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71331) - No such process 00:15:51.461 03:00:22 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71331 is not found' 00:15:51.461 ************************************ 00:15:51.461 END TEST nvme_xnvme 00:15:51.461 ************************************ 00:15:51.461 00:15:51.461 real 3m32.814s 00:15:51.461 user 1m56.028s 00:15:51.461 sys 1m22.235s 00:15:51.461 03:00:22 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:51.461 03:00:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:51.461 03:00:22 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:51.461 03:00:22 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:51.461 03:00:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:51.461 03:00:22 -- common/autotest_common.sh@10 -- # set +x 00:15:51.723 ************************************ 00:15:51.723 START TEST blockdev_xnvme 00:15:51.723 ************************************ 00:15:51.723 03:00:22 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:51.723 * Looking for test storage... 00:15:51.723 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:51.723 03:00:22 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:51.723 03:00:22 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:51.723 03:00:22 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:51.723 03:00:22 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:51.723 03:00:22 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:51.723 03:00:22 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:51.723 03:00:22 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:51.723 03:00:22 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:51.723 03:00:22 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:51.723 03:00:22 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:51.723 03:00:22 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:51.723 03:00:22 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:51.723 03:00:22 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:51.723 03:00:22 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:51.724 03:00:22 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:51.724 03:00:22 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:51.724 03:00:22 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:51.724 03:00:22 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:51.724 03:00:22 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:51.724 03:00:22 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:51.724 03:00:22 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:51.724 03:00:22 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:51.724 03:00:22 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:51.724 03:00:22 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:51.724 03:00:22 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:51.724 03:00:22 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:51.724 03:00:22 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:51.724 03:00:22 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:51.724 03:00:22 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:51.724 03:00:22 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:51.724 03:00:22 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:51.724 03:00:22 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:51.724 03:00:22 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:51.724 03:00:22 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:51.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.724 --rc genhtml_branch_coverage=1 00:15:51.724 --rc genhtml_function_coverage=1 00:15:51.724 --rc genhtml_legend=1 00:15:51.724 --rc geninfo_all_blocks=1 00:15:51.724 --rc geninfo_unexecuted_blocks=1 00:15:51.724 00:15:51.724 ' 00:15:51.724 03:00:22 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:51.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.724 --rc genhtml_branch_coverage=1 00:15:51.724 --rc genhtml_function_coverage=1 00:15:51.724 --rc genhtml_legend=1 00:15:51.724 --rc geninfo_all_blocks=1 00:15:51.724 --rc geninfo_unexecuted_blocks=1 00:15:51.724 00:15:51.724 ' 00:15:51.724 03:00:22 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:51.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.724 --rc genhtml_branch_coverage=1 00:15:51.724 --rc genhtml_function_coverage=1 00:15:51.724 --rc genhtml_legend=1 00:15:51.724 --rc geninfo_all_blocks=1 00:15:51.724 --rc geninfo_unexecuted_blocks=1 00:15:51.724 00:15:51.724 ' 00:15:51.724 03:00:22 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:51.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.724 --rc genhtml_branch_coverage=1 00:15:51.724 --rc genhtml_function_coverage=1 00:15:51.724 --rc genhtml_legend=1 00:15:51.724 --rc geninfo_all_blocks=1 00:15:51.724 --rc geninfo_unexecuted_blocks=1 00:15:51.724 00:15:51.724 ' 00:15:51.724 03:00:22 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:51.724 03:00:22 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:51.724 03:00:22 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:51.724 03:00:22 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:51.724 03:00:22 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:51.724 03:00:22 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:51.724 03:00:22 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:51.724 03:00:22 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:51.724 03:00:22 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:51.724 03:00:22 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:51.724 03:00:22 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:51.724 03:00:22 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:15:51.724 03:00:22 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:15:51.724 03:00:22 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:51.724 03:00:22 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:51.724 03:00:22 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:15:51.724 03:00:22 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:51.724 03:00:22 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:15:51.724 03:00:22 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:51.724 03:00:22 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:51.724 03:00:22 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:51.724 03:00:22 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:15:51.724 03:00:22 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:15:51.724 03:00:22 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:51.724 03:00:22 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71971 00:15:51.724 03:00:22 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:51.724 03:00:22 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 71971 00:15:51.724 03:00:22 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 71971 ']' 00:15:51.724 03:00:22 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.724 03:00:22 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:51.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.724 03:00:22 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:51.724 03:00:22 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.724 03:00:22 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:51.724 03:00:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:51.986 [2024-12-05 03:00:22.585462] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:15:51.986 [2024-12-05 03:00:22.585860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71971 ] 00:15:51.986 [2024-12-05 03:00:22.754231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.281 [2024-12-05 03:00:22.874808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.894 03:00:23 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:52.894 03:00:23 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:52.894 03:00:23 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:52.894 03:00:23 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:15:52.894 03:00:23 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:52.894 03:00:23 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:52.894 03:00:23 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:53.467 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:53.729 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:53.991 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:53.991 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:53.991 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:53.991 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:15:53.991 03:00:24 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:53.991 03:00:24 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:53.991 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:53.991 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:53.991 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:53.991 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:53.991 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:53.991 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:53.991 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:53.991 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:53.991 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:53.991 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:53.991 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:53.991 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:15:53.992 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:53.992 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:53.992 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:53.992 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:53.992 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:53.992 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:53.992 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:53.992 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:53.992 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:53.992 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:53.992 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:53.992 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:53.992 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:53.992 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:53.992 03:00:24 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.992 03:00:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:53.992 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:53.992 nvme0n1 00:15:53.992 nvme0n2 00:15:53.992 nvme0n3 00:15:53.992 nvme1n1 00:15:53.992 nvme2n1 00:15:53.992 nvme3n1 00:15:53.992 03:00:24 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.992 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:53.992 03:00:24 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.992 03:00:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:53.992 03:00:24 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.992 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:15:53.992 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:15:53.992 03:00:24 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.992 03:00:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:53.992 03:00:24 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.992 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:53.992 03:00:24 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.992 03:00:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:53.992 03:00:24 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.992 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:53.992 03:00:24 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.992 03:00:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:53.992 
03:00:24 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.992 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:53.992 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:53.992 03:00:24 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.992 03:00:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:53.992 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:15:53.992 03:00:24 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.992 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:53.992 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:53.992 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "3ea8f001-5e5c-4bed-ad8b-e1148724519b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3ea8f001-5e5c-4bed-ad8b-e1148724519b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "99c5b8ea-94a4-41d6-a016-71e15d413327"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "99c5b8ea-94a4-41d6-a016-71e15d413327",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "4afdf69a-a5b3-4cb9-bf26-1c8e308b84f2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4afdf69a-a5b3-4cb9-bf26-1c8e308b84f2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "d20e475e-c11d-425b-b3eb-e297ae8cc8ad"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d20e475e-c11d-425b-b3eb-e297ae8cc8ad",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "5f2c9100-8b17-4859-8d99-eb8ca191c068"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "5f2c9100-8b17-4859-8d99-eb8ca191c068",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "0e3c4531-0d9e-479f-9c08-8f73c5133af3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "0e3c4531-0d9e-479f-9c08-8f73c5133af3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:54.253 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:54.253 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:15:54.253 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:54.253 03:00:24 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 71971 00:15:54.253 03:00:24 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71971 ']' 00:15:54.253 03:00:24 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 71971 00:15:54.253 03:00:24 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:54.253 03:00:24 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:54.254 03:00:24 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 71971 00:15:54.254 killing process with pid 71971 00:15:54.254 03:00:24 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:54.254 03:00:24 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:54.254 03:00:24 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71971' 00:15:54.254 03:00:24 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 71971 00:15:54.254 03:00:24 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 71971 00:15:56.169 03:00:26 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:56.169 03:00:26 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:56.169 03:00:26 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:56.169 03:00:26 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:56.169 03:00:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:56.169 ************************************ 00:15:56.169 START TEST bdev_hello_world 00:15:56.169 ************************************ 00:15:56.169 03:00:26 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:56.169 [2024-12-05 03:00:26.796723] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:15:56.169 [2024-12-05 03:00:26.797207] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72254 ] 00:15:56.169 [2024-12-05 03:00:26.965812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.461 [2024-12-05 03:00:27.108259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.722 [2024-12-05 03:00:27.555965] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:56.722 [2024-12-05 03:00:27.556285] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:56.722 [2024-12-05 03:00:27.556316] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:56.722 [2024-12-05 03:00:27.558666] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:56.722 [2024-12-05 03:00:27.559335] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:56.722 [2024-12-05 03:00:27.559364] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:56.722 [2024-12-05 03:00:27.559957] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:15:56.722 00:15:56.722 [2024-12-05 03:00:27.560047] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:57.664 00:15:57.664 real 0m1.698s 00:15:57.664 user 0m1.261s 00:15:57.664 sys 0m0.284s 00:15:57.664 03:00:28 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:57.664 03:00:28 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:57.664 ************************************ 00:15:57.664 END TEST bdev_hello_world 00:15:57.665 ************************************ 00:15:57.665 03:00:28 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:57.665 03:00:28 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:57.665 03:00:28 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:57.665 03:00:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:57.665 ************************************ 00:15:57.665 START TEST bdev_bounds 00:15:57.665 ************************************ 00:15:57.665 03:00:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:57.665 Process bdevio pid: 72292 00:15:57.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.665 03:00:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72292 00:15:57.665 03:00:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:57.665 03:00:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72292' 00:15:57.665 03:00:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72292 00:15:57.665 03:00:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72292 ']' 00:15:57.665 03:00:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.665 03:00:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:57.665 03:00:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.665 03:00:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:57.665 03:00:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:57.665 03:00:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:57.926 [2024-12-05 03:00:28.567573] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:15:57.926 [2024-12-05 03:00:28.567742] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72292 ] 00:15:57.926 [2024-12-05 03:00:28.736459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:58.187 [2024-12-05 03:00:28.883704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.187 [2024-12-05 03:00:28.884419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:58.187 [2024-12-05 03:00:28.884525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.760 03:00:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.760 03:00:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:58.760 03:00:29 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:58.760 I/O targets: 00:15:58.760 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:58.760 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:58.760 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:58.760 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:58.760 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:58.760 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:58.760 00:15:58.760 00:15:58.760 CUnit - A unit testing framework for C - Version 2.1-3 00:15:58.760 http://cunit.sourceforge.net/ 00:15:58.760 00:15:58.760 00:15:58.760 Suite: bdevio tests on: nvme3n1 00:15:58.760 Test: blockdev write read block ...passed 00:15:58.760 Test: blockdev write zeroes read block ...passed 00:15:58.760 Test: blockdev write zeroes read no split ...passed 00:15:58.760 Test: blockdev write zeroes read split ...passed 00:15:58.760 Test: blockdev write zeroes read split partial ...passed 00:15:58.760 Test: blockdev reset ...passed 00:15:58.760 Test: blockdev write read 8 blocks ...passed 00:15:58.760 Test: blockdev write read size > 128k ...passed 00:15:58.760 Test: blockdev write read invalid size ...passed 00:15:58.760 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:58.760 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:58.760 Test: blockdev write read max offset ...passed 00:15:58.760 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:58.760 Test: blockdev writev readv 8 blocks ...passed 00:15:58.760 Test: blockdev writev readv 30 x 1block ...passed 00:15:58.760 Test: blockdev writev readv block ...passed 00:15:58.760 Test: blockdev writev readv size > 128k ...passed 00:15:58.760 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:58.760 Test: blockdev comparev and writev ...passed 00:15:58.760 Test: blockdev nvme passthru rw ...passed 00:15:58.760 Test: blockdev nvme passthru vendor specific ...passed 00:15:58.760 Test: blockdev nvme admin passthru ...passed 00:15:58.760 Test: blockdev copy ...passed 00:15:58.760 Suite: bdevio tests on: nvme2n1 00:15:58.760 Test: blockdev write read block ...passed 00:15:58.760 Test: blockdev write zeroes read block ...passed 00:15:58.760 Test: blockdev write zeroes read no split ...passed 00:15:59.022 Test: blockdev write zeroes read split ...passed 00:15:59.022 Test: blockdev write zeroes read split partial ...passed 00:15:59.022 Test: blockdev reset ...passed 
00:15:59.022 Test: blockdev write read 8 blocks ...passed 00:15:59.022 Test: blockdev write read size > 128k ...passed 00:15:59.022 Test: blockdev write read invalid size ...passed 00:15:59.022 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:59.022 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:59.022 Test: blockdev write read max offset ...passed 00:15:59.022 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:59.022 Test: blockdev writev readv 8 blocks ...passed 00:15:59.022 Test: blockdev writev readv 30 x 1block ...passed 00:15:59.022 Test: blockdev writev readv block ...passed 00:15:59.022 Test: blockdev writev readv size > 128k ...passed 00:15:59.022 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:59.022 Test: blockdev comparev and writev ...passed 00:15:59.022 Test: blockdev nvme passthru rw ...passed 00:15:59.022 Test: blockdev nvme passthru vendor specific ...passed 00:15:59.022 Test: blockdev nvme admin passthru ...passed 00:15:59.022 Test: blockdev copy ...passed 00:15:59.022 Suite: bdevio tests on: nvme1n1 00:15:59.022 Test: blockdev write read block ...passed 00:15:59.022 Test: blockdev write zeroes read block ...passed 00:15:59.022 Test: blockdev write zeroes read no split ...passed 00:15:59.022 Test: blockdev write zeroes read split ...passed 00:15:59.022 Test: blockdev write zeroes read split partial ...passed 00:15:59.022 Test: blockdev reset ...passed 00:15:59.022 Test: blockdev write read 8 blocks ...passed 00:15:59.022 Test: blockdev write read size > 128k ...passed 00:15:59.022 Test: blockdev write read invalid size ...passed 00:15:59.022 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:59.022 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:59.022 Test: blockdev write read max offset ...passed 00:15:59.022 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:59.022 Test: blockdev writev readv 8 blocks ...passed 00:15:59.022 Test: blockdev writev readv 30 x 1block ...passed 00:15:59.022 Test: blockdev writev readv block ...passed 00:15:59.022 Test: blockdev writev readv size > 128k ...passed 00:15:59.022 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:59.022 Test: blockdev comparev and writev ...passed 00:15:59.022 Test: blockdev nvme passthru rw ...passed 00:15:59.022 Test: blockdev nvme passthru vendor specific ...passed 00:15:59.022 Test: blockdev nvme admin passthru ...passed 00:15:59.022 Test: blockdev copy ...passed 00:15:59.022 Suite: bdevio tests on: nvme0n3 00:15:59.022 Test: blockdev write read block ...passed 00:15:59.022 Test: blockdev write zeroes read block ...passed 00:15:59.022 Test: blockdev write zeroes read no split ...passed 00:15:59.022 Test: blockdev write zeroes read split ...passed 00:15:59.022 Test: blockdev write zeroes read split partial ...passed 00:15:59.022 Test: blockdev reset ...passed 00:15:59.022 Test: blockdev write read 8 blocks ...passed 00:15:59.022 Test: blockdev write read size > 128k ...passed 00:15:59.022 Test: blockdev write read invalid size ...passed 00:15:59.022 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:59.022 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:59.022 Test: blockdev write read max offset ...passed 00:15:59.022 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:59.022 Test: blockdev writev readv 8 blocks 
...passed 00:15:59.022 Test: blockdev writev readv 30 x 1block ...passed 00:15:59.022 Test: blockdev writev readv block ...passed 00:15:59.022 Test: blockdev writev readv size > 128k ...passed 00:15:59.022 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:59.022 Test: blockdev comparev and writev ...passed 00:15:59.022 Test: blockdev nvme passthru rw ...passed 00:15:59.022 Test: blockdev nvme passthru vendor specific ...passed 00:15:59.022 Test: blockdev nvme admin passthru ...passed 00:15:59.022 Test: blockdev copy ...passed 00:15:59.022 Suite: bdevio tests on: nvme0n2 00:15:59.022 Test: blockdev write read block ...passed 00:15:59.022 Test: blockdev write zeroes read block ...passed 00:15:59.022 Test: blockdev write zeroes read no split ...passed 00:15:59.284 Test: blockdev write zeroes read split ...passed 00:15:59.284 Test: blockdev write zeroes read split partial ...passed 00:15:59.284 Test: blockdev reset ...passed 00:15:59.284 Test: blockdev write read 8 blocks ...passed 00:15:59.284 Test: blockdev write read size > 128k ...passed 00:15:59.284 Test: blockdev write read invalid size ...passed 00:15:59.284 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:59.284 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:59.284 Test: blockdev write read max offset ...passed 00:15:59.284 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:59.284 Test: blockdev writev readv 8 blocks ...passed 00:15:59.284 Test: blockdev writev readv 30 x 1block ...passed 00:15:59.284 Test: blockdev writev readv block ...passed 00:15:59.284 Test: blockdev writev readv size > 128k ...passed 00:15:59.284 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:59.284 Test: blockdev comparev and writev ...passed 00:15:59.284 Test: blockdev nvme passthru rw ...passed 00:15:59.284 Test: blockdev nvme passthru vendor specific ...passed 00:15:59.284 Test: blockdev nvme admin passthru ...passed 00:15:59.284 Test: blockdev copy ...passed 00:15:59.284 Suite: bdevio tests on: nvme0n1 00:15:59.284 Test: blockdev write read block ...passed 00:15:59.284 Test: blockdev write zeroes read block ...passed 00:15:59.284 Test: blockdev write zeroes read no split ...passed 00:15:59.284 Test: blockdev write zeroes read split ...passed 00:15:59.284 Test: blockdev write zeroes read split partial ...passed 00:15:59.284 Test: blockdev reset ...passed 00:15:59.284 Test: blockdev write read 8 blocks ...passed 00:15:59.284 Test: blockdev write read size > 128k ...passed 00:15:59.284 Test: blockdev write read invalid size ...passed 00:15:59.284 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:59.284 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:59.284 Test: blockdev write read max offset ...passed 00:15:59.284 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:59.284 Test: blockdev writev readv 8 blocks ...passed 00:15:59.284 Test: blockdev writev readv 30 x 1block ...passed 00:15:59.284 Test: blockdev writev readv block ...passed 00:15:59.284 Test: blockdev writev readv size > 128k ...passed 00:15:59.284 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:59.284 Test: blockdev comparev and writev ...passed 00:15:59.284 Test: blockdev nvme passthru rw ...passed 00:15:59.284 Test: blockdev nvme passthru vendor specific ...passed 00:15:59.284 Test: blockdev nvme admin passthru ...passed 00:15:59.284 Test: blockdev copy ...passed 
00:15:59.284 00:15:59.284 Run Summary: Type Total Ran Passed Failed Inactive 00:15:59.284 suites 6 6 n/a 0 0 00:15:59.284 tests 138 138 138 0 0 00:15:59.284 asserts 780 780 780 0 n/a 00:15:59.284 00:15:59.284 Elapsed time = 1.260 seconds 00:15:59.284 0 00:15:59.284 03:00:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72292 00:15:59.284 03:00:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72292 ']' 00:15:59.284 03:00:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72292 00:15:59.284 03:00:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:59.284 03:00:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:59.284 03:00:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72292 00:15:59.284 killing process with pid 72292 00:15:59.284 03:00:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:59.284 03:00:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:59.284 03:00:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72292' 00:15:59.284 03:00:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72292 00:15:59.284 03:00:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72292 00:16:00.230 ************************************ 00:16:00.230 END TEST bdev_bounds 00:16:00.230 ************************************ 00:16:00.230 03:00:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:16:00.230 00:16:00.230 real 0m2.428s 00:16:00.230 user 0m5.762s 00:16:00.230 sys 0m0.406s 00:16:00.230 03:00:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:00.230 03:00:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:00.230 03:00:30 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:16:00.230 03:00:30 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:00.230 03:00:30 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:00.230 03:00:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:00.230 ************************************ 00:16:00.230 START TEST bdev_nbd 00:16:00.230 ************************************ 00:16:00.230 03:00:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:16:00.230 03:00:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:16:00.230 03:00:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:16:00.230 03:00:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:00.230 03:00:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:00.230 03:00:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:00.231 03:00:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:16:00.231 03:00:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:16:00.231 03:00:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:16:00.231 03:00:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:16:00.231 03:00:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:16:00.231 03:00:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:16:00.231 03:00:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:00.231 03:00:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:16:00.231 03:00:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:00.231 03:00:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:16:00.231 03:00:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72346 00:16:00.231 03:00:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:16:00.231 03:00:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72346 /var/tmp/spdk-nbd.sock 00:16:00.231 03:00:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72346 ']' 00:16:00.231 03:00:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:00.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:00.231 03:00:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:00.231 03:00:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:00.231 03:00:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:00.231 03:00:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:00.231 03:00:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:00.231 [2024-12-05 03:00:31.061100] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:16:00.231 [2024-12-05 03:00:31.061223] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.490 [2024-12-05 03:00:31.218103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.490 [2024-12-05 03:00:31.306222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.058 03:00:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:01.058 03:00:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:16:01.058 03:00:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:16:01.058 03:00:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:01.058 03:00:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:01.317 03:00:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:16:01.317 03:00:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:16:01.317 03:00:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:01.317 03:00:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:01.317 03:00:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:16:01.317 03:00:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:16:01.317 03:00:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:16:01.317 03:00:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:16:01.317 03:00:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:01.317 03:00:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:16:01.317 03:00:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:16:01.317 03:00:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:16:01.317 03:00:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:16:01.317 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:01.317 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:01.317 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:01.317 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:01.317 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:01.317 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:01.317 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:01.317 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:01.317 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:01.317 
1+0 records in 00:16:01.317 1+0 records out 00:16:01.317 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324816 s, 12.6 MB/s 00:16:01.317 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.317 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:01.317 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.317 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:01.317 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:01.317 03:00:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:01.317 03:00:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:01.317 03:00:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:16:01.575 03:00:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:16:01.575 03:00:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:16:01.575 03:00:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:16:01.575 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:01.575 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:01.575 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:01.575 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:01.575 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:01.575 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:01.575 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:01.575 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:01.575 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:01.575 1+0 records in 00:16:01.575 1+0 records out 00:16:01.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000746997 s, 5.5 MB/s 00:16:01.575 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.575 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:01.575 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.575 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:01.576 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:01.576 03:00:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:01.576 03:00:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:01.576 03:00:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:16:01.834 03:00:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:16:01.834 03:00:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:16:01.834 03:00:32 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:16:01.834 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:16:01.834 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:01.834 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:01.834 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:01.834 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:16:01.834 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:01.834 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:01.834 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:01.834 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:01.834 1+0 records in 00:16:01.834 1+0 records out 00:16:01.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000981902 s, 4.2 MB/s 00:16:01.834 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.834 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:01.834 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.834 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:01.834 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:01.834 03:00:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:01.834 03:00:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:01.834 03:00:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:16:02.093 03:00:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:16:02.093 03:00:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:16:02.093 03:00:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:16:02.093 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:16:02.093 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:02.093 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:02.093 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:02.094 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:16:02.094 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:02.094 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:02.094 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:02.094 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.094 1+0 records in 00:16:02.094 1+0 records out 00:16:02.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000760291 s, 5.4 MB/s 00:16:02.094 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.094 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:02.094 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.094 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:02.094 03:00:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:02.094 03:00:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:02.094 03:00:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:02.094 03:00:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:16:02.352 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:16:02.352 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:16:02.352 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:16:02.352 03:00:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:16:02.352 03:00:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:02.352 03:00:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:02.352 03:00:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:02.352 03:00:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:16:02.352 03:00:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:02.352 03:00:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:02.352 03:00:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:02.352 03:00:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.352 1+0 records in 00:16:02.352 1+0 records out 00:16:02.352 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0012667 s, 3.2 MB/s 00:16:02.352 03:00:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.352 03:00:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:02.352 03:00:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.352 03:00:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:02.352 03:00:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:02.352 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:02.352 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:02.352 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:16:02.611 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:16:02.611 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:16:02.611 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:16:02.611 03:00:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:16:02.611 03:00:33 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:02.611 03:00:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:02.611 03:00:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:02.611 03:00:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:16:02.611 03:00:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:02.611 03:00:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:02.611 03:00:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:02.611 03:00:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.611 1+0 records in 00:16:02.611 1+0 records out 00:16:02.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00082487 s, 5.0 MB/s 00:16:02.611 03:00:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.611 03:00:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:02.611 03:00:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.611 03:00:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:02.611 03:00:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:02.611 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:02.611 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:02.611 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:02.870 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:16:02.870 { 00:16:02.870 "nbd_device": "/dev/nbd0", 00:16:02.870 "bdev_name": "nvme0n1" 00:16:02.870 }, 00:16:02.870 { 00:16:02.870 "nbd_device": "/dev/nbd1", 00:16:02.870 "bdev_name": "nvme0n2" 00:16:02.870 }, 00:16:02.870 { 00:16:02.870 "nbd_device": "/dev/nbd2", 00:16:02.870 "bdev_name": "nvme0n3" 00:16:02.870 }, 00:16:02.870 { 00:16:02.870 "nbd_device": "/dev/nbd3", 00:16:02.870 "bdev_name": "nvme1n1" 00:16:02.870 }, 00:16:02.870 { 00:16:02.870 "nbd_device": "/dev/nbd4", 00:16:02.870 "bdev_name": "nvme2n1" 00:16:02.870 }, 00:16:02.870 { 00:16:02.870 "nbd_device": "/dev/nbd5", 00:16:02.870 "bdev_name": "nvme3n1" 00:16:02.870 } 00:16:02.870 ]' 00:16:02.870 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:16:02.870 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:16:02.870 { 00:16:02.870 "nbd_device": "/dev/nbd0", 00:16:02.870 "bdev_name": "nvme0n1" 00:16:02.870 }, 00:16:02.870 { 00:16:02.870 "nbd_device": "/dev/nbd1", 00:16:02.870 "bdev_name": "nvme0n2" 00:16:02.870 }, 00:16:02.870 { 00:16:02.870 "nbd_device": "/dev/nbd2", 00:16:02.870 "bdev_name": "nvme0n3" 00:16:02.870 }, 00:16:02.870 { 00:16:02.870 "nbd_device": "/dev/nbd3", 00:16:02.870 "bdev_name": "nvme1n1" 00:16:02.870 }, 00:16:02.870 { 00:16:02.870 "nbd_device": "/dev/nbd4", 00:16:02.870 "bdev_name": "nvme2n1" 00:16:02.870 }, 00:16:02.870 { 00:16:02.870 "nbd_device": "/dev/nbd5", 00:16:02.870 "bdev_name": "nvme3n1" 00:16:02.870 } 00:16:02.870 ]' 00:16:02.870 03:00:33 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:16:02.870 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:16:02.870 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:02.870 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:16:02.870 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:02.870 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:02.870 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.870 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:03.129 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:03.129 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:03.129 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:03.129 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:03.129 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:03.129 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:03.129 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:03.129 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:03.129 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:03.129 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:03.129 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:03.129 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:03.129 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:03.129 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:03.129 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:03.129 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:03.129 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:03.129 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:03.129 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:03.129 03:00:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:16:03.387 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:16:03.387 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:16:03.387 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:16:03.387 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:03.387 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:03.387 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:16:03.387 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:03.387 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:03.388 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:03.388 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:16:03.646 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:16:03.646 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:16:03.646 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:16:03.646 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:03.646 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:03.646 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:16:03.646 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:03.646 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:03.646 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:03.646 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:16:03.905 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:16:03.905 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:16:03.905 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:16:03.905 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:03.905 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:03.905 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:16:03.905 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:03.905 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:03.905 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:03.905 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:16:04.163 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:16:04.163 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:16:04.163 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:16:04.163 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:04.163 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:04.163 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:16:04.163 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:04.163 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:04.163 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:04.163 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:04.163 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:04.163 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:04.163 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:04.163 03:00:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:04.421 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:04.421 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:04.421 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:04.421 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:04.421 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:04.421 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:04.421 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:16:04.421 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:16:04.421 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:16:04.421 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:04.421 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:04.421 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:04.421 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:04.421 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:04.421 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:04.421 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:04.421 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:04.421 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:04.421 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:04.421 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:04.421 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:04.421 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:16:04.421 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:04.421 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:04.421 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:16:04.421 /dev/nbd0 00:16:04.421 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:04.421 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:04.421 03:00:35 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:04.421 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:04.421 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:04.421 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:04.421 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:04.678 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:04.678 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:04.678 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:04.678 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:04.678 1+0 records in 00:16:04.678 1+0 records out 00:16:04.678 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000879055 s, 4.7 MB/s 00:16:04.678 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.678 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:04.678 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.678 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:04.678 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:04.678 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:04.678 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:04.678 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:16:04.678 /dev/nbd1 00:16:04.678 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:04.678 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:04.678 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:04.678 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:04.678 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:04.678 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:04.678 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:04.678 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:04.678 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:04.678 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:04.678 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:04.678 1+0 records in 00:16:04.678 1+0 records out 00:16:04.678 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000831507 s, 4.9 MB/s 00:16:04.678 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.678 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:04.678 03:00:35 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.678 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:04.678 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:04.678 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:04.678 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:04.678 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:16:04.936 /dev/nbd10 00:16:04.936 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:16:04.936 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:16:04.936 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:16:04.936 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:04.936 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:04.936 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:04.936 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:16:04.936 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:04.936 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:04.936 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:04.936 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:04.936 1+0 records in 00:16:04.936 1+0 records out 00:16:04.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100884 s, 4.1 MB/s 00:16:04.936 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.936 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:04.936 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.936 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:04.936 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:04.936 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:04.936 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:04.936 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:16:05.194 /dev/nbd11 00:16:05.194 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:16:05.194 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:16:05.194 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:16:05.194 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:05.194 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:05.194 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:05.194 03:00:35 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:16:05.194 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:05.194 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:05.194 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:05.194 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:05.194 1+0 records in 00:16:05.194 1+0 records out 00:16:05.194 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101341 s, 4.0 MB/s 00:16:05.194 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.194 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:05.194 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.194 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:05.194 03:00:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:05.194 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:05.194 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:05.194 03:00:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:16:05.452 /dev/nbd12 00:16:05.452 03:00:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:16:05.452 03:00:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:16:05.452 03:00:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:16:05.452 03:00:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:05.452 03:00:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:05.452 03:00:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:05.452 03:00:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:16:05.452 03:00:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:05.452 03:00:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:05.452 03:00:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:05.452 03:00:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:05.452 1+0 records in 00:16:05.452 1+0 records out 00:16:05.452 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00128199 s, 3.2 MB/s 00:16:05.452 03:00:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.452 03:00:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:05.452 03:00:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.452 03:00:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:05.452 03:00:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:05.452 03:00:36 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:05.452 03:00:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:05.452 03:00:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:16:05.711 /dev/nbd13 00:16:05.711 03:00:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:16:05.711 03:00:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:16:05.711 03:00:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:16:05.711 03:00:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:05.711 03:00:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:05.711 03:00:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:05.711 03:00:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:16:05.711 03:00:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:05.711 03:00:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:05.711 03:00:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:05.711 03:00:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:05.711 1+0 records in 00:16:05.711 1+0 records out 00:16:05.711 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000903931 s, 4.5 MB/s 00:16:05.711 03:00:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.711 03:00:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:05.711 03:00:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.711 03:00:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:05.711 03:00:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:05.711 03:00:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:05.711 03:00:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:05.711 03:00:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:05.711 03:00:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:05.711 03:00:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:05.971 03:00:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:05.971 { 00:16:05.971 "nbd_device": "/dev/nbd0", 00:16:05.971 "bdev_name": "nvme0n1" 00:16:05.971 }, 00:16:05.971 { 00:16:05.971 "nbd_device": "/dev/nbd1", 00:16:05.971 "bdev_name": "nvme0n2" 00:16:05.971 }, 00:16:05.971 { 00:16:05.971 "nbd_device": "/dev/nbd10", 00:16:05.971 "bdev_name": "nvme0n3" 00:16:05.971 }, 00:16:05.971 { 00:16:05.971 "nbd_device": "/dev/nbd11", 00:16:05.971 "bdev_name": "nvme1n1" 00:16:05.971 }, 00:16:05.971 { 00:16:05.971 "nbd_device": "/dev/nbd12", 00:16:05.971 "bdev_name": "nvme2n1" 00:16:05.971 }, 00:16:05.971 { 00:16:05.971 "nbd_device": "/dev/nbd13", 00:16:05.971 "bdev_name": "nvme3n1" 00:16:05.971 } 00:16:05.971 ]' 00:16:05.971 03:00:36 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:05.971 { 00:16:05.971 "nbd_device": "/dev/nbd0", 00:16:05.971 "bdev_name": "nvme0n1" 00:16:05.971 }, 00:16:05.971 { 00:16:05.971 "nbd_device": "/dev/nbd1", 00:16:05.971 "bdev_name": "nvme0n2" 00:16:05.971 }, 00:16:05.971 { 00:16:05.971 "nbd_device": "/dev/nbd10", 00:16:05.971 "bdev_name": "nvme0n3" 00:16:05.971 }, 00:16:05.971 { 00:16:05.971 "nbd_device": "/dev/nbd11", 00:16:05.971 "bdev_name": "nvme1n1" 00:16:05.971 }, 00:16:05.971 { 00:16:05.971 "nbd_device": "/dev/nbd12", 00:16:05.971 "bdev_name": "nvme2n1" 00:16:05.971 }, 00:16:05.971 { 00:16:05.971 "nbd_device": "/dev/nbd13", 00:16:05.971 "bdev_name": "nvme3n1" 00:16:05.971 } 00:16:05.971 ]' 00:16:05.971 03:00:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:05.971 03:00:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:05.971 /dev/nbd1 00:16:05.971 /dev/nbd10 00:16:05.971 /dev/nbd11 00:16:05.971 /dev/nbd12 00:16:05.971 /dev/nbd13' 00:16:05.971 03:00:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:05.971 /dev/nbd1 00:16:05.971 /dev/nbd10 00:16:05.971 /dev/nbd11 00:16:05.971 /dev/nbd12 00:16:05.971 /dev/nbd13' 00:16:05.971 03:00:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:05.971 03:00:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:16:05.971 03:00:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:16:05.971 03:00:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:16:05.971 03:00:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:16:05.971 03:00:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:16:05.971 03:00:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:05.971 03:00:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:05.971 03:00:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:05.971 03:00:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:05.971 03:00:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:05.971 03:00:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:16:05.971 256+0 records in 00:16:05.971 256+0 records out 00:16:05.971 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0041484 s, 253 MB/s 00:16:05.971 03:00:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:05.971 03:00:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:06.232 256+0 records in 00:16:06.232 256+0 records out 00:16:06.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.207083 s, 5.1 MB/s 00:16:06.232 03:00:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:06.232 03:00:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:06.494 256+0 records in 00:16:06.494 256+0 records out 00:16:06.494 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.227778 s, 4.6 MB/s 00:16:06.494 03:00:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:06.494 03:00:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:16:06.756 256+0 records in 00:16:06.756 256+0 records out 00:16:06.756 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.243629 s, 4.3 MB/s 00:16:06.756 03:00:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:06.756 03:00:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:16:06.756 256+0 records in 00:16:06.756 256+0 records out 00:16:06.756 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.211276 s, 5.0 MB/s 00:16:06.756 03:00:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:06.756 03:00:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:16:07.328 256+0 records in 00:16:07.328 256+0 records out 00:16:07.328 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.313978 s, 3.3 MB/s 00:16:07.328 03:00:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:07.328 03:00:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:16:07.328 256+0 records in 00:16:07.328 256+0 records out 00:16:07.328 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.245495 s, 4.3 MB/s 00:16:07.328 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:16:07.328 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:07.328 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:07.328 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:07.328 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:07.328 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:07.328 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:07.328 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:07.328 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:16:07.328 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:07.328 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:16:07.328 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:07.328 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:16:07.586 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:07.586 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:16:07.586 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:07.586 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:16:07.586 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:07.586 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:16:07.586 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:07.586 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:07.586 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:07.586 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:07.586 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:07.586 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:07.586 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:07.586 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:07.586 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:07.586 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:07.586 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:07.586 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:07.586 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:07.586 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:07.586 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:07.586 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:07.586 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:07.586 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:07.843 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:07.843 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:07.843 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:07.843 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:07.843 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:07.843 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:07.843 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:07.843 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:07.843 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:07.843 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:16:08.100 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:16:08.100 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:16:08.100 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:16:08.100 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:08.100 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:08.100 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:16:08.100 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:08.100 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:08.100 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:08.100 03:00:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:16:08.357 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:16:08.357 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:16:08.357 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:16:08.357 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:08.357 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:08.357 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:16:08.357 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:08.357 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:08.357 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:08.357 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:16:08.614 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:16:08.614 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:16:08.614 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:16:08.614 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:08.614 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:08.615 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:16:08.615 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:08.615 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:08.615 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:08.615 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:16:08.615 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:16:08.615 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:16:08.615 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:16:08.615 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:08.615 03:00:39 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:08.615 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:16:08.872 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:08.872 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:08.872 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:08.872 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:08.872 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:08.872 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:08.872 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:08.872 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:08.872 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:08.872 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:08.872 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:08.872 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:08.872 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:08.872 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:08.872 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:08.872 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:08.872 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:08.872 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:08.872 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:08.872 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:16:08.872 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:09.130 malloc_lvol_verify 00:16:09.130 03:00:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:09.386 ec384cbc-b21c-4749-86e4-1d4ef59172f7 00:16:09.387 03:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:09.644 e38e19be-b9e9-404b-ae5e-3905cfe1fd4f 00:16:09.644 03:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:09.902 /dev/nbd0 00:16:09.902 03:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:16:09.902 03:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:16:09.902 03:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:16:09.902 03:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:16:09.902 03:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
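The RPC sequence traced above builds a malloc-backed lvstore, carves a logical volume out of it, and exports it as /dev/nbd0; the mke2fs output that follows shows that NBD device being formatted like any ordinary block disk. A minimal standalone sketch of the same flow (bdev names, sizes and the /var/tmp/spdk-nbd.sock socket are taken from this run; it assumes an SPDK application is already serving that RPC socket):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-nbd.sock
    # malloc bdev (16 MiB, 512-byte blocks) to back the lvstore -- sizes as used in this run
    "$RPC" -s "$SOCK" bdev_malloc_create -b malloc_lvol_verify 16 512
    "$RPC" -s "$SOCK" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$RPC" -s "$SOCK" bdev_lvol_create lvol 4 -l lvs        # lvol named "lvol", size argument 4 as above
    "$RPC" -s "$SOCK" nbd_start_disk lvs/lvol /dev/nbd0     # expose the lvol as a kernel NBD device
    mkfs.ext4 /dev/nbd0                                     # format it like any block device
    "$RPC" -s "$SOCK" nbd_stop_disk /dev/nbd0
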
00:16:09.902 mke2fs 1.47.0 (5-Feb-2023) 00:16:09.902 Discarding device blocks: 0/4096 done 00:16:09.902 Creating filesystem with 4096 1k blocks and 1024 inodes 00:16:09.902 00:16:09.902 Allocating group tables: 0/1 done 00:16:09.902 Writing inode tables: 0/1 done 00:16:09.902 Creating journal (1024 blocks): done 00:16:09.902 Writing superblocks and filesystem accounting information: 0/1 done 00:16:09.902 00:16:09.902 03:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:09.902 03:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:09.902 03:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:09.902 03:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:09.902 03:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:09.902 03:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:09.902 03:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:09.902 03:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:09.902 03:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:09.902 03:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:09.902 03:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:09.902 03:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:09.902 03:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:09.902 03:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:09.902 03:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:09.902 03:00:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72346 00:16:09.902 03:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72346 ']' 00:16:09.902 03:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72346 00:16:09.902 03:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:16:09.902 03:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:09.902 03:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72346 00:16:10.161 03:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:10.161 killing process with pid 72346 00:16:10.161 03:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:10.161 03:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72346' 00:16:10.161 03:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72346 00:16:10.161 03:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72346 00:16:10.730 03:00:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:16:10.730 00:16:10.730 real 0m10.409s 00:16:10.730 user 0m13.963s 00:16:10.730 sys 0m3.583s 00:16:10.730 ************************************ 00:16:10.730 END TEST bdev_nbd 00:16:10.730 ************************************ 00:16:10.730 03:00:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:10.730 
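The bdev_nbd test that ends here checked data integrity the same way for every exported device: write a 1 MiB random pattern through the NBD device with O_DIRECT, then compare the device contents back against the pattern file. A condensed sketch of that check, assuming the six bdevs are already attached to /dev/nbd0, /dev/nbd1 and /dev/nbd10-13 as above (the actual helper does a full write pass over all devices followed by a separate compare pass):

    pattern=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    dd if=/dev/urandom of="$pattern" bs=4096 count=256              # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        dd if="$pattern" of="$nbd" bs=4096 count=256 oflag=direct   # write the pattern through the NBD device
        cmp -b -n 1M "$pattern" "$nbd"                              # read it back and compare byte-for-byte
    done
    rm "$pattern"
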
03:00:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:10.730 03:00:41 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:16:10.730 03:00:41 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:16:10.730 03:00:41 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:16:10.730 03:00:41 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:16:10.730 03:00:41 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:10.730 03:00:41 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:10.730 03:00:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:10.730 ************************************ 00:16:10.730 START TEST bdev_fio 00:16:10.730 ************************************ 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:16:10.730 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo 
serialize_overlap=1 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:16:10.730 03:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:16:10.731 03:00:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:10.731 03:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:16:10.731 03:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:10.731 03:00:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:10.731 ************************************ 00:16:10.731 START TEST bdev_fio_rw_verify 00:16:10.731 ************************************ 00:16:10.731 03:00:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:10.731 03:00:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:10.731 03:00:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:10.731 03:00:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:10.731 03:00:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:10.731 03:00:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:10.731 03:00:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:16:10.731 03:00:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:10.731 03:00:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:10.731 03:00:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:10.731 03:00:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:16:10.731 03:00:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:10.731 03:00:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:10.731 03:00:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:10.731 03:00:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:16:10.731 03:00:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:10.731 03:00:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:10.991 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:10.991 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:10.991 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:10.991 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:10.991 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:10.991 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:10.991 fio-3.35 00:16:10.991 Starting 6 threads 00:16:23.219 00:16:23.219 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72751: Thu Dec 5 03:00:52 2024 00:16:23.219 read: IOPS=14.7k, BW=57.3MiB/s (60.1MB/s)(573MiB/10004msec) 00:16:23.219 slat (usec): min=2, max=1753, avg= 6.77, stdev=13.03 00:16:23.219 clat (usec): min=90, max=8575, avg=1335.60, 
stdev=707.78 00:16:23.219 lat (usec): min=98, max=8603, avg=1342.37, stdev=708.34 00:16:23.219 clat percentiles (usec): 00:16:23.219 | 50.000th=[ 1237], 99.000th=[ 3523], 99.900th=[ 4883], 99.990th=[ 8094], 00:16:23.219 | 99.999th=[ 8586] 00:16:23.219 write: IOPS=15.1k, BW=58.9MiB/s (61.7MB/s)(589MiB/10004msec); 0 zone resets 00:16:23.219 slat (usec): min=13, max=4765, avg=39.68, stdev=130.43 00:16:23.219 clat (usec): min=97, max=7345, avg=1563.65, stdev=748.98 00:16:23.219 lat (usec): min=115, max=7360, avg=1603.34, stdev=760.79 00:16:23.219 clat percentiles (usec): 00:16:23.219 | 50.000th=[ 1450], 99.000th=[ 3884], 99.900th=[ 5145], 99.990th=[ 6587], 00:16:23.219 | 99.999th=[ 7111] 00:16:23.219 bw ( KiB/s): min=49028, max=83541, per=100.00%, avg=60523.32, stdev=1749.97, samples=114 00:16:23.219 iops : min=12254, max=20884, avg=15130.05, stdev=437.52, samples=114 00:16:23.219 lat (usec) : 100=0.01%, 250=1.26%, 500=5.08%, 750=8.99%, 1000=12.87% 00:16:23.219 lat (msec) : 2=52.45%, 4=18.75%, 10=0.60% 00:16:23.219 cpu : usr=43.77%, sys=31.75%, ctx=5643, majf=0, minf=14912 00:16:23.219 IO depths : 1=11.4%, 2=23.9%, 4=51.1%, 8=13.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:23.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.219 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.219 issued rwts: total=146796,150811,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:23.219 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:23.219 00:16:23.219 Run status group 0 (all jobs): 00:16:23.219 READ: bw=57.3MiB/s (60.1MB/s), 57.3MiB/s-57.3MiB/s (60.1MB/s-60.1MB/s), io=573MiB (601MB), run=10004-10004msec 00:16:23.219 WRITE: bw=58.9MiB/s (61.7MB/s), 58.9MiB/s-58.9MiB/s (61.7MB/s-61.7MB/s), io=589MiB (618MB), run=10004-10004msec 00:16:23.219 ----------------------------------------------------- 00:16:23.219 Suppressions used: 00:16:23.219 count bytes template 00:16:23.219 6 48 /usr/src/fio/parse.c 00:16:23.219 3920 376320 /usr/src/fio/iolog.c 00:16:23.219 1 8 libtcmalloc_minimal.so 00:16:23.219 1 904 libcrypto.so 00:16:23.219 ----------------------------------------------------- 00:16:23.219 00:16:23.219 00:16:23.219 real 0m11.952s 00:16:23.219 user 0m27.758s 00:16:23.219 sys 0m19.379s 00:16:23.219 03:00:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:23.219 03:00:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:23.219 ************************************ 00:16:23.219 END TEST bdev_fio_rw_verify 00:16:23.219 ************************************ 00:16:23.219 03:00:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:16:23.219 03:00:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:23.219 03:00:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:23.219 03:00:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:23.219 03:00:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:16:23.219 03:00:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:16:23.219 03:00:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:23.219 03:00:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:16:23.219 03:00:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:23.219 03:00:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:16:23.219 03:00:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:23.219 03:00:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:23.219 03:00:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:23.219 03:00:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:16:23.219 03:00:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:16:23.219 03:00:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:16:23.219 03:00:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:23.220 03:00:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "3ea8f001-5e5c-4bed-ad8b-e1148724519b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3ea8f001-5e5c-4bed-ad8b-e1148724519b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "99c5b8ea-94a4-41d6-a016-71e15d413327"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "99c5b8ea-94a4-41d6-a016-71e15d413327",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "4afdf69a-a5b3-4cb9-bf26-1c8e308b84f2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4afdf69a-a5b3-4cb9-bf26-1c8e308b84f2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "d20e475e-c11d-425b-b3eb-e297ae8cc8ad"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d20e475e-c11d-425b-b3eb-e297ae8cc8ad",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "5f2c9100-8b17-4859-8d99-eb8ca191c068"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "5f2c9100-8b17-4859-8d99-eb8ca191c068",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "0e3c4531-0d9e-479f-9c08-8f73c5133af3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "0e3c4531-0d9e-479f-9c08-8f73c5133af3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:23.220 03:00:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:23.220 03:00:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:23.220 /home/vagrant/spdk_repo/spdk 00:16:23.220 03:00:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:23.220 03:00:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:23.220 03:00:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:16:23.220 ************************************ 00:16:23.220 END TEST bdev_fio 00:16:23.220 ************************************ 00:16:23.220 00:16:23.220 real 0m12.124s 00:16:23.220 user 0m27.842s 00:16:23.220 sys 0m19.447s 00:16:23.220 03:00:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:23.220 03:00:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:23.220 03:00:53 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:23.220 03:00:53 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:23.220 03:00:53 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:23.220 03:00:53 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:23.220 03:00:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:23.220 ************************************ 00:16:23.220 START TEST bdev_verify 00:16:23.220 ************************************ 00:16:23.220 03:00:53 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:23.220 [2024-12-05 03:00:53.721943] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:16:23.220 [2024-12-05 03:00:53.722104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72924 ] 00:16:23.220 [2024-12-05 03:00:53.883235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:23.220 [2024-12-05 03:00:54.001960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.220 [2024-12-05 03:00:54.002058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.793 Running I/O for 5 seconds... 
00:16:26.122 23264.00 IOPS, 90.88 MiB/s [2024-12-05T03:00:57.907Z] 23168.00 IOPS, 90.50 MiB/s [2024-12-05T03:00:58.848Z] 23274.67 IOPS, 90.92 MiB/s [2024-12-05T03:00:59.909Z] 22696.00 IOPS, 88.66 MiB/s [2024-12-05T03:00:59.909Z] 22735.00 IOPS, 88.81 MiB/s 00:16:29.065 Latency(us) 00:16:29.065 [2024-12-05T03:00:59.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.065 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:29.065 Verification LBA range: start 0x0 length 0x80000 00:16:29.065 nvme0n1 : 5.04 1905.68 7.44 0.00 0.00 67056.04 9578.34 72190.42 00:16:29.065 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:29.065 Verification LBA range: start 0x80000 length 0x80000 00:16:29.065 nvme0n1 : 5.07 1664.65 6.50 0.00 0.00 76739.78 6805.66 73803.62 00:16:29.065 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:29.065 Verification LBA range: start 0x0 length 0x80000 00:16:29.065 nvme0n2 : 5.06 1896.54 7.41 0.00 0.00 67263.88 10384.94 72997.02 00:16:29.066 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:29.066 Verification LBA range: start 0x80000 length 0x80000 00:16:29.066 nvme0n2 : 5.05 1673.84 6.54 0.00 0.00 76158.40 12905.55 75013.51 00:16:29.066 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:29.066 Verification LBA range: start 0x0 length 0x80000 00:16:29.066 nvme0n3 : 5.08 1889.12 7.38 0.00 0.00 67411.54 11544.42 65737.65 00:16:29.066 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:29.066 Verification LBA range: start 0x80000 length 0x80000 00:16:29.066 nvme0n3 : 5.06 1669.00 6.52 0.00 0.00 76196.66 13510.50 71383.83 00:16:29.066 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:29.066 Verification LBA range: start 0x0 length 0x20000 00:16:29.066 nvme1n1 : 5.09 1887.63 7.37 0.00 0.00 67361.81 9275.86 61704.66 00:16:29.066 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:29.066 Verification LBA range: start 0x20000 length 0x20000 00:16:29.066 nvme1n1 : 5.07 1666.68 6.51 0.00 0.00 76149.06 11695.66 68157.44 00:16:29.066 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:29.066 Verification LBA range: start 0x0 length 0xbd0bd 00:16:29.066 nvme2n1 : 5.09 2514.84 9.82 0.00 0.00 50456.04 7360.20 52832.10 00:16:29.066 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:29.066 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:16:29.066 nvme2n1 : 5.09 2367.26 9.25 0.00 0.00 53439.18 5091.64 61704.66 00:16:29.066 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:29.066 Verification LBA range: start 0x0 length 0xa0000 00:16:29.066 nvme3n1 : 5.08 1839.71 7.19 0.00 0.00 68897.48 6755.25 96388.33 00:16:29.066 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:29.066 Verification LBA range: start 0xa0000 length 0xa0000 00:16:29.066 nvme3n1 : 5.09 1558.95 6.09 0.00 0.00 81047.06 6402.36 125829.12 00:16:29.066 [2024-12-05T03:00:59.910Z] =================================================================================================================== 00:16:29.066 [2024-12-05T03:00:59.910Z] Total : 22533.90 88.02 0.00 0.00 67693.38 5091.64 125829.12 00:16:30.010 00:16:30.010 real 0m6.837s 00:16:30.010 user 0m10.908s 00:16:30.010 sys 0m1.556s 00:16:30.010 03:01:00 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:30.010 ************************************ 00:16:30.010 END TEST bdev_verify 00:16:30.010 ************************************ 00:16:30.010 03:01:00 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:30.010 03:01:00 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:30.010 03:01:00 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:30.010 03:01:00 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:30.010 03:01:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:30.010 ************************************ 00:16:30.010 START TEST bdev_verify_big_io 00:16:30.010 ************************************ 00:16:30.010 03:01:00 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:30.010 [2024-12-05 03:01:00.642867] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:16:30.010 [2024-12-05 03:01:00.643023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73026 ] 00:16:30.010 [2024-12-05 03:01:00.811628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:30.272 [2024-12-05 03:01:00.957913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.272 [2024-12-05 03:01:00.958039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.844 Running I/O for 5 seconds... 
00:16:36.702 2112.00 IOPS, 132.00 MiB/s [2024-12-05T03:01:08.117Z] 3304.00 IOPS, 206.50 MiB/s 00:16:37.273 Latency(us) 00:16:37.273 [2024-12-05T03:01:08.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.273 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:37.273 Verification LBA range: start 0x0 length 0x8000 00:16:37.273 nvme0n1 : 5.89 127.69 7.98 0.00 0.00 968726.66 40531.50 1051802.39 00:16:37.273 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:37.273 Verification LBA range: start 0x8000 length 0x8000 00:16:37.273 nvme0n1 : 5.95 69.90 4.37 0.00 0.00 1717463.46 39119.95 1729343.80 00:16:37.273 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:37.273 Verification LBA range: start 0x0 length 0x8000 00:16:37.273 nvme0n2 : 5.91 132.76 8.30 0.00 0.00 911196.24 6125.10 1471232.79 00:16:37.273 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:37.273 Verification LBA range: start 0x8000 length 0x8000 00:16:37.273 nvme0n2 : 5.67 90.25 5.64 0.00 0.00 1288887.06 5973.86 1387346.71 00:16:37.273 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:37.273 Verification LBA range: start 0x0 length 0x8000 00:16:37.273 nvme0n3 : 5.89 127.63 7.98 0.00 0.00 928450.98 141961.06 1574477.19 00:16:37.273 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:37.273 Verification LBA range: start 0x8000 length 0x8000 00:16:37.273 nvme0n3 : 6.02 106.39 6.65 0.00 0.00 1037098.38 59284.87 1045349.61 00:16:37.273 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:37.273 Verification LBA range: start 0x0 length 0x2000 00:16:37.273 nvme1n1 : 5.90 173.64 10.85 0.00 0.00 668061.54 102034.51 1006632.96 00:16:37.273 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:37.273 Verification LBA range: start 0x2000 length 0x2000 00:16:37.273 nvme1n1 : 6.02 106.36 6.65 0.00 0.00 981179.31 49202.41 1013085.74 00:16:37.273 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:37.273 Verification LBA range: start 0x0 length 0xbd0b 00:16:37.273 nvme2n1 : 5.80 151.85 9.49 0.00 0.00 740090.94 14720.39 1664816.05 00:16:37.273 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:37.273 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:37.273 nvme2n1 : 6.22 154.25 9.64 0.00 0.00 652827.10 2104.71 3690987.52 00:16:37.273 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:37.273 Verification LBA range: start 0x0 length 0xa000 00:16:37.273 nvme3n1 : 5.90 159.88 9.99 0.00 0.00 691528.51 3163.37 1096971.82 00:16:37.273 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:37.273 Verification LBA range: start 0xa000 length 0xa000 00:16:37.273 nvme3n1 : 6.49 298.33 18.65 0.00 0.00 321846.48 376.52 1780966.01 00:16:37.273 [2024-12-05T03:01:08.117Z] =================================================================================================================== 00:16:37.273 [2024-12-05T03:01:08.117Z] Total : 1698.94 106.18 0.00 0.00 785655.75 376.52 3690987.52 00:16:38.210 00:16:38.210 real 0m8.317s 00:16:38.210 user 0m15.223s 00:16:38.210 sys 0m0.542s 00:16:38.210 03:01:08 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:38.210 ************************************ 00:16:38.210 03:01:08 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:38.210 END TEST bdev_verify_big_io 00:16:38.210 ************************************ 00:16:38.210 03:01:08 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:38.210 03:01:08 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:38.210 03:01:08 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:38.210 03:01:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:38.210 ************************************ 00:16:38.210 START TEST bdev_write_zeroes 00:16:38.210 ************************************ 00:16:38.210 03:01:08 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:38.210 [2024-12-05 03:01:09.007412] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:16:38.210 [2024-12-05 03:01:09.007641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73144 ] 00:16:38.468 [2024-12-05 03:01:09.167237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.468 [2024-12-05 03:01:09.258206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.035 Running I/O for 1 seconds... 00:16:39.974 81504.00 IOPS, 318.38 MiB/s 00:16:39.974 Latency(us) 00:16:39.974 [2024-12-05T03:01:10.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.974 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:39.974 nvme0n1 : 1.02 13080.54 51.10 0.00 0.00 9776.60 5343.70 17039.36 00:16:39.974 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:39.974 nvme0n2 : 1.02 13065.47 51.04 0.00 0.00 9781.28 5368.91 17140.18 00:16:39.974 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:39.974 nvme0n3 : 1.02 13048.45 50.97 0.00 0.00 9787.53 5293.29 17140.18 00:16:39.974 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:39.974 nvme1n1 : 1.02 13114.70 51.23 0.00 0.00 9731.47 5268.09 17140.18 00:16:39.974 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:39.974 nvme2n1 : 1.02 15728.77 61.44 0.00 0.00 8098.55 2936.52 16535.24 00:16:39.974 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:39.974 nvme3n1 : 1.02 13221.74 51.65 0.00 0.00 9610.12 4461.49 17039.36 00:16:39.974 [2024-12-05T03:01:10.818Z] =================================================================================================================== 00:16:39.974 [2024-12-05T03:01:10.818Z] Total : 81259.67 317.42 0.00 0.00 9419.07 2936.52 17140.18 00:16:40.916 00:16:40.916 real 0m2.501s 00:16:40.916 user 0m1.838s 00:16:40.916 sys 0m0.488s 00:16:40.916 03:01:11 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:40.916 ************************************ 00:16:40.916 END TEST bdev_write_zeroes 00:16:40.916 ************************************ 00:16:40.916 
03:01:11 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:40.916 03:01:11 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:40.917 03:01:11 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:40.917 03:01:11 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:40.917 03:01:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:40.917 ************************************ 00:16:40.917 START TEST bdev_json_nonenclosed 00:16:40.917 ************************************ 00:16:40.917 03:01:11 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:40.917 [2024-12-05 03:01:11.577150] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:16:40.917 [2024-12-05 03:01:11.577276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73194 ] 00:16:40.917 [2024-12-05 03:01:11.733737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.177 [2024-12-05 03:01:11.876454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.177 [2024-12-05 03:01:11.876567] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:41.177 [2024-12-05 03:01:11.876589] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:41.177 [2024-12-05 03:01:11.876600] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:41.438 00:16:41.438 real 0m0.583s 00:16:41.438 user 0m0.357s 00:16:41.438 sys 0m0.120s 00:16:41.438 03:01:12 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:41.438 ************************************ 00:16:41.438 END TEST bdev_json_nonenclosed 00:16:41.438 ************************************ 00:16:41.438 03:01:12 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:41.438 03:01:12 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:41.438 03:01:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:41.438 03:01:12 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:41.438 03:01:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:41.438 ************************************ 00:16:41.438 START TEST bdev_json_nonarray 00:16:41.438 ************************************ 00:16:41.438 03:01:12 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:41.438 [2024-12-05 03:01:12.224925] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:16:41.438 [2024-12-05 03:01:12.225113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73214 ] 00:16:41.699 [2024-12-05 03:01:12.391686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.699 [2024-12-05 03:01:12.535087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.699 [2024-12-05 03:01:12.535211] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:16:41.699 [2024-12-05 03:01:12.535233] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:41.699 [2024-12-05 03:01:12.535245] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:41.959 00:16:41.959 real 0m0.597s 00:16:41.959 user 0m0.360s 00:16:41.959 sys 0m0.130s 00:16:41.959 03:01:12 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:41.959 ************************************ 00:16:41.959 END TEST bdev_json_nonarray 00:16:41.959 ************************************ 00:16:41.959 03:01:12 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:41.959 03:01:12 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:16:41.959 03:01:12 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:16:41.959 03:01:12 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:16:41.959 03:01:12 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:16:41.959 03:01:12 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:16:41.959 03:01:12 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:42.220 03:01:12 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:42.220 03:01:12 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:42.220 03:01:12 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:42.220 03:01:12 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:42.220 03:01:12 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:42.220 03:01:12 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:42.481 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:43.424 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:43.685 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:43.685 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:43.945 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:44.205 00:16:44.205 real 0m52.499s 00:16:44.205 user 1m22.717s 00:16:44.205 sys 0m30.578s 00:16:44.205 03:01:14 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:44.205 ************************************ 00:16:44.205 END TEST blockdev_xnvme 00:16:44.205 ************************************ 00:16:44.205 03:01:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:44.205 03:01:14 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:44.205 03:01:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:44.205 03:01:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:44.205 03:01:14 -- 
common/autotest_common.sh@10 -- # set +x 00:16:44.205 ************************************ 00:16:44.205 START TEST ublk 00:16:44.205 ************************************ 00:16:44.205 03:01:14 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:44.205 * Looking for test storage... 00:16:44.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:44.205 03:01:14 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:44.205 03:01:14 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:16:44.205 03:01:14 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:44.205 03:01:15 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:44.205 03:01:15 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:44.205 03:01:15 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:44.205 03:01:15 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:44.205 03:01:15 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:44.205 03:01:15 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:44.205 03:01:15 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:44.205 03:01:15 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:44.205 03:01:15 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:44.205 03:01:15 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:44.205 03:01:15 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:44.205 03:01:15 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:44.205 03:01:15 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:44.205 03:01:15 ublk -- scripts/common.sh@345 -- # : 1 00:16:44.205 03:01:15 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:44.205 03:01:15 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:44.205 03:01:15 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:44.205 03:01:15 ublk -- scripts/common.sh@353 -- # local d=1 00:16:44.205 03:01:15 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:44.205 03:01:15 ublk -- scripts/common.sh@355 -- # echo 1 00:16:44.205 03:01:15 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:44.205 03:01:15 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:44.205 03:01:15 ublk -- scripts/common.sh@353 -- # local d=2 00:16:44.205 03:01:15 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:44.205 03:01:15 ublk -- scripts/common.sh@355 -- # echo 2 00:16:44.205 03:01:15 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:44.205 03:01:15 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:44.205 03:01:15 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:44.205 03:01:15 ublk -- scripts/common.sh@368 -- # return 0 00:16:44.205 03:01:15 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:44.206 03:01:15 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:44.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.206 --rc genhtml_branch_coverage=1 00:16:44.206 --rc genhtml_function_coverage=1 00:16:44.206 --rc genhtml_legend=1 00:16:44.206 --rc geninfo_all_blocks=1 00:16:44.206 --rc geninfo_unexecuted_blocks=1 00:16:44.206 00:16:44.206 ' 00:16:44.206 03:01:15 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:44.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.206 --rc genhtml_branch_coverage=1 00:16:44.206 --rc genhtml_function_coverage=1 00:16:44.206 --rc genhtml_legend=1 00:16:44.206 --rc geninfo_all_blocks=1 00:16:44.206 --rc geninfo_unexecuted_blocks=1 00:16:44.206 00:16:44.206 ' 00:16:44.206 03:01:15 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:44.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.206 --rc genhtml_branch_coverage=1 00:16:44.206 --rc genhtml_function_coverage=1 00:16:44.206 --rc genhtml_legend=1 00:16:44.206 --rc geninfo_all_blocks=1 00:16:44.206 --rc geninfo_unexecuted_blocks=1 00:16:44.206 00:16:44.206 ' 00:16:44.206 03:01:15 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:44.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.206 --rc genhtml_branch_coverage=1 00:16:44.206 --rc genhtml_function_coverage=1 00:16:44.206 --rc genhtml_legend=1 00:16:44.206 --rc geninfo_all_blocks=1 00:16:44.206 --rc geninfo_unexecuted_blocks=1 00:16:44.206 00:16:44.206 ' 00:16:44.206 03:01:15 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:44.206 03:01:15 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:44.206 03:01:15 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:44.206 03:01:15 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:44.206 03:01:15 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:44.206 03:01:15 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:44.206 03:01:15 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:44.206 03:01:15 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:44.206 03:01:15 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:44.206 03:01:15 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:44.206 03:01:15 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:44.206 03:01:15 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:44.206 03:01:15 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:44.206 03:01:15 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:44.206 03:01:15 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:44.206 03:01:15 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:44.206 03:01:15 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:44.206 03:01:15 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:44.206 03:01:15 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:44.467 03:01:15 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:44.467 03:01:15 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:44.467 03:01:15 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:44.467 03:01:15 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.467 ************************************ 00:16:44.467 START TEST test_save_ublk_config 00:16:44.467 ************************************ 00:16:44.467 03:01:15 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:44.467 03:01:15 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:44.467 03:01:15 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73513 00:16:44.468 03:01:15 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:44.468 03:01:15 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73513 00:16:44.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.468 03:01:15 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73513 ']' 00:16:44.468 03:01:15 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.468 03:01:15 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:44.468 03:01:15 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.468 03:01:15 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:44.468 03:01:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:44.468 03:01:15 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:44.468 [2024-12-05 03:01:15.154289] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:16:44.468 [2024-12-05 03:01:15.154437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73513 ] 00:16:44.728 [2024-12-05 03:01:15.316166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.728 [2024-12-05 03:01:15.446814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.668 03:01:16 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:45.668 03:01:16 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:45.668 03:01:16 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:45.668 03:01:16 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:45.668 03:01:16 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.668 03:01:16 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:45.668 [2024-12-05 03:01:16.167098] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:45.668 [2024-12-05 03:01:16.168003] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:45.668 malloc0 00:16:45.668 [2024-12-05 03:01:16.239208] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:45.668 [2024-12-05 03:01:16.239289] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:45.668 [2024-12-05 03:01:16.239299] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:45.668 [2024-12-05 03:01:16.239307] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:45.668 [2024-12-05 03:01:16.248177] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:45.668 [2024-12-05 03:01:16.248200] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:45.668 [2024-12-05 03:01:16.255104] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:45.668 [2024-12-05 03:01:16.255205] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:45.668 [2024-12-05 03:01:16.272100] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:45.668 0 00:16:45.668 03:01:16 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.668 03:01:16 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:45.668 03:01:16 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.668 03:01:16 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:45.929 03:01:16 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.929 03:01:16 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:45.929 "subsystems": [ 00:16:45.929 { 00:16:45.929 "subsystem": "fsdev", 00:16:45.929 "config": [ 00:16:45.929 { 00:16:45.929 "method": "fsdev_set_opts", 00:16:45.929 "params": { 00:16:45.929 "fsdev_io_pool_size": 65535, 00:16:45.929 "fsdev_io_cache_size": 256 00:16:45.929 } 00:16:45.929 } 00:16:45.929 ] 00:16:45.929 }, 00:16:45.929 { 00:16:45.929 "subsystem": "keyring", 00:16:45.929 "config": [] 00:16:45.929 }, 00:16:45.929 { 00:16:45.929 "subsystem": "iobuf", 00:16:45.929 "config": [ 00:16:45.929 { 
00:16:45.929 "method": "iobuf_set_options", 00:16:45.929 "params": { 00:16:45.929 "small_pool_count": 8192, 00:16:45.929 "large_pool_count": 1024, 00:16:45.929 "small_bufsize": 8192, 00:16:45.929 "large_bufsize": 135168, 00:16:45.929 "enable_numa": false 00:16:45.930 } 00:16:45.930 } 00:16:45.930 ] 00:16:45.930 }, 00:16:45.930 { 00:16:45.930 "subsystem": "sock", 00:16:45.930 "config": [ 00:16:45.930 { 00:16:45.930 "method": "sock_set_default_impl", 00:16:45.930 "params": { 00:16:45.930 "impl_name": "posix" 00:16:45.930 } 00:16:45.930 }, 00:16:45.930 { 00:16:45.930 "method": "sock_impl_set_options", 00:16:45.930 "params": { 00:16:45.930 "impl_name": "ssl", 00:16:45.930 "recv_buf_size": 4096, 00:16:45.930 "send_buf_size": 4096, 00:16:45.930 "enable_recv_pipe": true, 00:16:45.930 "enable_quickack": false, 00:16:45.930 "enable_placement_id": 0, 00:16:45.930 "enable_zerocopy_send_server": true, 00:16:45.930 "enable_zerocopy_send_client": false, 00:16:45.930 "zerocopy_threshold": 0, 00:16:45.930 "tls_version": 0, 00:16:45.930 "enable_ktls": false 00:16:45.930 } 00:16:45.930 }, 00:16:45.930 { 00:16:45.930 "method": "sock_impl_set_options", 00:16:45.930 "params": { 00:16:45.930 "impl_name": "posix", 00:16:45.930 "recv_buf_size": 2097152, 00:16:45.930 "send_buf_size": 2097152, 00:16:45.930 "enable_recv_pipe": true, 00:16:45.930 "enable_quickack": false, 00:16:45.930 "enable_placement_id": 0, 00:16:45.930 "enable_zerocopy_send_server": true, 00:16:45.930 "enable_zerocopy_send_client": false, 00:16:45.930 "zerocopy_threshold": 0, 00:16:45.930 "tls_version": 0, 00:16:45.930 "enable_ktls": false 00:16:45.930 } 00:16:45.930 } 00:16:45.930 ] 00:16:45.930 }, 00:16:45.930 { 00:16:45.930 "subsystem": "vmd", 00:16:45.930 "config": [] 00:16:45.930 }, 00:16:45.930 { 00:16:45.930 "subsystem": "accel", 00:16:45.930 "config": [ 00:16:45.930 { 00:16:45.930 "method": "accel_set_options", 00:16:45.930 "params": { 00:16:45.930 "small_cache_size": 128, 00:16:45.930 "large_cache_size": 16, 00:16:45.930 "task_count": 2048, 00:16:45.930 "sequence_count": 2048, 00:16:45.930 "buf_count": 2048 00:16:45.930 } 00:16:45.930 } 00:16:45.930 ] 00:16:45.930 }, 00:16:45.930 { 00:16:45.930 "subsystem": "bdev", 00:16:45.930 "config": [ 00:16:45.930 { 00:16:45.930 "method": "bdev_set_options", 00:16:45.930 "params": { 00:16:45.930 "bdev_io_pool_size": 65535, 00:16:45.930 "bdev_io_cache_size": 256, 00:16:45.930 "bdev_auto_examine": true, 00:16:45.930 "iobuf_small_cache_size": 128, 00:16:45.930 "iobuf_large_cache_size": 16 00:16:45.930 } 00:16:45.930 }, 00:16:45.930 { 00:16:45.930 "method": "bdev_raid_set_options", 00:16:45.930 "params": { 00:16:45.930 "process_window_size_kb": 1024, 00:16:45.930 "process_max_bandwidth_mb_sec": 0 00:16:45.930 } 00:16:45.930 }, 00:16:45.930 { 00:16:45.930 "method": "bdev_iscsi_set_options", 00:16:45.930 "params": { 00:16:45.930 "timeout_sec": 30 00:16:45.930 } 00:16:45.930 }, 00:16:45.930 { 00:16:45.930 "method": "bdev_nvme_set_options", 00:16:45.930 "params": { 00:16:45.930 "action_on_timeout": "none", 00:16:45.930 "timeout_us": 0, 00:16:45.930 "timeout_admin_us": 0, 00:16:45.930 "keep_alive_timeout_ms": 10000, 00:16:45.930 "arbitration_burst": 0, 00:16:45.930 "low_priority_weight": 0, 00:16:45.930 "medium_priority_weight": 0, 00:16:45.930 "high_priority_weight": 0, 00:16:45.930 "nvme_adminq_poll_period_us": 10000, 00:16:45.930 "nvme_ioq_poll_period_us": 0, 00:16:45.930 "io_queue_requests": 0, 00:16:45.930 "delay_cmd_submit": true, 00:16:45.930 "transport_retry_count": 4, 00:16:45.930 
"bdev_retry_count": 3, 00:16:45.930 "transport_ack_timeout": 0, 00:16:45.930 "ctrlr_loss_timeout_sec": 0, 00:16:45.930 "reconnect_delay_sec": 0, 00:16:45.930 "fast_io_fail_timeout_sec": 0, 00:16:45.930 "disable_auto_failback": false, 00:16:45.930 "generate_uuids": false, 00:16:45.930 "transport_tos": 0, 00:16:45.930 "nvme_error_stat": false, 00:16:45.930 "rdma_srq_size": 0, 00:16:45.930 "io_path_stat": false, 00:16:45.930 "allow_accel_sequence": false, 00:16:45.930 "rdma_max_cq_size": 0, 00:16:45.930 "rdma_cm_event_timeout_ms": 0, 00:16:45.930 "dhchap_digests": [ 00:16:45.930 "sha256", 00:16:45.930 "sha384", 00:16:45.930 "sha512" 00:16:45.930 ], 00:16:45.930 "dhchap_dhgroups": [ 00:16:45.930 "null", 00:16:45.930 "ffdhe2048", 00:16:45.930 "ffdhe3072", 00:16:45.930 "ffdhe4096", 00:16:45.930 "ffdhe6144", 00:16:45.930 "ffdhe8192" 00:16:45.930 ] 00:16:45.930 } 00:16:45.930 }, 00:16:45.930 { 00:16:45.930 "method": "bdev_nvme_set_hotplug", 00:16:45.930 "params": { 00:16:45.930 "period_us": 100000, 00:16:45.930 "enable": false 00:16:45.930 } 00:16:45.930 }, 00:16:45.930 { 00:16:45.930 "method": "bdev_malloc_create", 00:16:45.930 "params": { 00:16:45.930 "name": "malloc0", 00:16:45.930 "num_blocks": 8192, 00:16:45.930 "block_size": 4096, 00:16:45.930 "physical_block_size": 4096, 00:16:45.930 "uuid": "c0f8f55a-3031-476b-8be1-0431308383ad", 00:16:45.930 "optimal_io_boundary": 0, 00:16:45.930 "md_size": 0, 00:16:45.930 "dif_type": 0, 00:16:45.930 "dif_is_head_of_md": false, 00:16:45.930 "dif_pi_format": 0 00:16:45.930 } 00:16:45.930 }, 00:16:45.930 { 00:16:45.930 "method": "bdev_wait_for_examine" 00:16:45.930 } 00:16:45.930 ] 00:16:45.930 }, 00:16:45.930 { 00:16:45.930 "subsystem": "scsi", 00:16:45.930 "config": null 00:16:45.930 }, 00:16:45.930 { 00:16:45.930 "subsystem": "scheduler", 00:16:45.930 "config": [ 00:16:45.930 { 00:16:45.930 "method": "framework_set_scheduler", 00:16:45.930 "params": { 00:16:45.930 "name": "static" 00:16:45.930 } 00:16:45.930 } 00:16:45.930 ] 00:16:45.930 }, 00:16:45.930 { 00:16:45.930 "subsystem": "vhost_scsi", 00:16:45.930 "config": [] 00:16:45.930 }, 00:16:45.930 { 00:16:45.930 "subsystem": "vhost_blk", 00:16:45.930 "config": [] 00:16:45.930 }, 00:16:45.930 { 00:16:45.930 "subsystem": "ublk", 00:16:45.930 "config": [ 00:16:45.930 { 00:16:45.930 "method": "ublk_create_target", 00:16:45.930 "params": { 00:16:45.930 "cpumask": "1" 00:16:45.930 } 00:16:45.930 }, 00:16:45.930 { 00:16:45.930 "method": "ublk_start_disk", 00:16:45.930 "params": { 00:16:45.931 "bdev_name": "malloc0", 00:16:45.931 "ublk_id": 0, 00:16:45.931 "num_queues": 1, 00:16:45.931 "queue_depth": 128 00:16:45.931 } 00:16:45.931 } 00:16:45.931 ] 00:16:45.931 }, 00:16:45.931 { 00:16:45.931 "subsystem": "nbd", 00:16:45.931 "config": [] 00:16:45.931 }, 00:16:45.931 { 00:16:45.931 "subsystem": "nvmf", 00:16:45.931 "config": [ 00:16:45.931 { 00:16:45.931 "method": "nvmf_set_config", 00:16:45.931 "params": { 00:16:45.931 "discovery_filter": "match_any", 00:16:45.931 "admin_cmd_passthru": { 00:16:45.931 "identify_ctrlr": false 00:16:45.931 }, 00:16:45.931 "dhchap_digests": [ 00:16:45.931 "sha256", 00:16:45.931 "sha384", 00:16:45.931 "sha512" 00:16:45.931 ], 00:16:45.931 "dhchap_dhgroups": [ 00:16:45.931 "null", 00:16:45.931 "ffdhe2048", 00:16:45.931 "ffdhe3072", 00:16:45.931 "ffdhe4096", 00:16:45.931 "ffdhe6144", 00:16:45.931 "ffdhe8192" 00:16:45.931 ] 00:16:45.931 } 00:16:45.931 }, 00:16:45.931 { 00:16:45.931 "method": "nvmf_set_max_subsystems", 00:16:45.931 "params": { 00:16:45.931 "max_subsystems": 1024 
00:16:45.931 } 00:16:45.931 }, 00:16:45.931 { 00:16:45.931 "method": "nvmf_set_crdt", 00:16:45.931 "params": { 00:16:45.931 "crdt1": 0, 00:16:45.931 "crdt2": 0, 00:16:45.931 "crdt3": 0 00:16:45.931 } 00:16:45.931 } 00:16:45.931 ] 00:16:45.931 }, 00:16:45.931 { 00:16:45.931 "subsystem": "iscsi", 00:16:45.931 "config": [ 00:16:45.931 { 00:16:45.931 "method": "iscsi_set_options", 00:16:45.931 "params": { 00:16:45.931 "node_base": "iqn.2016-06.io.spdk", 00:16:45.931 "max_sessions": 128, 00:16:45.931 "max_connections_per_session": 2, 00:16:45.931 "max_queue_depth": 64, 00:16:45.931 "default_time2wait": 2, 00:16:45.931 "default_time2retain": 20, 00:16:45.931 "first_burst_length": 8192, 00:16:45.931 "immediate_data": true, 00:16:45.931 "allow_duplicated_isid": false, 00:16:45.931 "error_recovery_level": 0, 00:16:45.931 "nop_timeout": 60, 00:16:45.931 "nop_in_interval": 30, 00:16:45.931 "disable_chap": false, 00:16:45.931 "require_chap": false, 00:16:45.931 "mutual_chap": false, 00:16:45.931 "chap_group": 0, 00:16:45.931 "max_large_datain_per_connection": 64, 00:16:45.931 "max_r2t_per_connection": 4, 00:16:45.931 "pdu_pool_size": 36864, 00:16:45.931 "immediate_data_pool_size": 16384, 00:16:45.931 "data_out_pool_size": 2048 00:16:45.931 } 00:16:45.931 } 00:16:45.931 ] 00:16:45.931 } 00:16:45.931 ] 00:16:45.931 }' 00:16:45.931 03:01:16 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73513 00:16:45.931 03:01:16 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73513 ']' 00:16:45.931 03:01:16 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73513 00:16:45.931 03:01:16 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:45.931 03:01:16 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:45.931 03:01:16 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73513 00:16:45.931 03:01:16 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:45.931 03:01:16 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:45.931 03:01:16 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73513' 00:16:45.931 killing process with pid 73513 00:16:45.931 03:01:16 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73513 00:16:45.931 03:01:16 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73513 00:16:47.319 [2024-12-05 03:01:17.758999] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:47.319 [2024-12-05 03:01:17.798240] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:47.319 [2024-12-05 03:01:17.798398] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:47.319 [2024-12-05 03:01:17.810105] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:47.319 [2024-12-05 03:01:17.810177] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:47.319 [2024-12-05 03:01:17.810193] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:47.319 [2024-12-05 03:01:17.810246] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:47.319 [2024-12-05 03:01:17.810422] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:48.706 03:01:19 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73572 00:16:48.706 03:01:19 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 73572 00:16:48.706 03:01:19 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73572 ']' 00:16:48.706 03:01:19 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.706 03:01:19 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:48.706 03:01:19 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:48.706 03:01:19 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.706 03:01:19 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:48.706 "subsystems": [ 00:16:48.706 { 00:16:48.706 "subsystem": "fsdev", 00:16:48.706 "config": [ 00:16:48.706 { 00:16:48.706 "method": "fsdev_set_opts", 00:16:48.706 "params": { 00:16:48.706 "fsdev_io_pool_size": 65535, 00:16:48.706 "fsdev_io_cache_size": 256 00:16:48.707 } 00:16:48.707 } 00:16:48.707 ] 00:16:48.707 }, 00:16:48.707 { 00:16:48.707 "subsystem": "keyring", 00:16:48.707 "config": [] 00:16:48.707 }, 00:16:48.707 { 00:16:48.707 "subsystem": "iobuf", 00:16:48.707 "config": [ 00:16:48.707 { 00:16:48.707 "method": "iobuf_set_options", 00:16:48.707 "params": { 00:16:48.707 "small_pool_count": 8192, 00:16:48.707 "large_pool_count": 1024, 00:16:48.707 "small_bufsize": 8192, 00:16:48.707 "large_bufsize": 135168, 00:16:48.707 "enable_numa": false 00:16:48.707 } 00:16:48.707 } 00:16:48.707 ] 00:16:48.707 }, 00:16:48.707 { 00:16:48.707 "subsystem": "sock", 00:16:48.707 "config": [ 00:16:48.707 { 00:16:48.707 "method": "sock_set_default_impl", 00:16:48.707 "params": { 00:16:48.707 "impl_name": "posix" 00:16:48.707 } 00:16:48.707 }, 00:16:48.707 { 00:16:48.707 "method": "sock_impl_set_options", 00:16:48.707 "params": { 00:16:48.707 "impl_name": "ssl", 00:16:48.707 "recv_buf_size": 4096, 00:16:48.707 "send_buf_size": 4096, 00:16:48.707 "enable_recv_pipe": true, 00:16:48.707 "enable_quickack": false, 00:16:48.707 "enable_placement_id": 0, 00:16:48.707 "enable_zerocopy_send_server": true, 00:16:48.707 "enable_zerocopy_send_client": false, 00:16:48.707 "zerocopy_threshold": 0, 00:16:48.707 "tls_version": 0, 00:16:48.707 "enable_ktls": false 00:16:48.707 } 00:16:48.707 }, 00:16:48.707 { 00:16:48.707 "method": "sock_impl_set_options", 00:16:48.707 "params": { 00:16:48.707 "impl_name": "posix", 00:16:48.707 "recv_buf_size": 2097152, 00:16:48.707 "send_buf_size": 2097152, 00:16:48.707 "enable_recv_pipe": true, 00:16:48.707 "enable_quickack": false, 00:16:48.707 "enable_placement_id": 0, 00:16:48.707 "enable_zerocopy_send_server": true, 00:16:48.707 "enable_zerocopy_send_client": false, 00:16:48.707 "zerocopy_threshold": 0, 00:16:48.707 "tls_version": 0, 00:16:48.707 "enable_ktls": false 00:16:48.707 } 00:16:48.707 } 00:16:48.707 ] 00:16:48.707 }, 00:16:48.707 { 00:16:48.707 "subsystem": "vmd", 00:16:48.707 "config": [] 00:16:48.707 }, 00:16:48.707 { 00:16:48.707 "subsystem": "accel", 00:16:48.707 "config": [ 00:16:48.707 { 00:16:48.707 "method": "accel_set_options", 00:16:48.707 "params": { 00:16:48.707 "small_cache_size": 128, 00:16:48.707 "large_cache_size": 16, 00:16:48.707 "task_count": 2048, 00:16:48.707 "sequence_count": 2048, 00:16:48.707 "buf_count": 2048 00:16:48.707 } 00:16:48.707 } 00:16:48.707 ] 00:16:48.707 }, 
00:16:48.707 { 00:16:48.707 "subsystem": "bdev", 00:16:48.707 "config": [ 00:16:48.707 { 00:16:48.707 "method": "bdev_set_options", 00:16:48.707 "params": { 00:16:48.707 "bdev_io_pool_size": 65535, 00:16:48.707 "bdev_io_cache_size": 256, 00:16:48.707 "bdev_auto_examine": true, 00:16:48.707 "iobuf_small_cache_size": 128, 00:16:48.707 "iobuf_large_cache_size": 16 00:16:48.707 } 00:16:48.707 }, 00:16:48.707 { 00:16:48.707 "method": "bdev_raid_set_options", 00:16:48.707 "params": { 00:16:48.707 "process_window_size_kb": 1024, 00:16:48.707 "process_max_bandwidth_mb_sec": 0 00:16:48.707 } 00:16:48.707 }, 00:16:48.707 { 00:16:48.707 "method": "bdev_iscsi_set_options", 00:16:48.707 "params": { 00:16:48.707 "timeout_sec": 30 00:16:48.707 } 00:16:48.707 }, 00:16:48.707 { 00:16:48.707 "method": "bdev_nvme_set_options", 00:16:48.707 "params": { 00:16:48.707 "action_on_timeout": "none", 00:16:48.707 "timeout_us": 0, 00:16:48.707 "timeout_admin_us": 0, 00:16:48.707 "keep_alive_timeout_ms": 10000, 00:16:48.707 "arbitration_burst": 0, 00:16:48.707 "low_priority_weight": 0, 00:16:48.707 "medium_priority_weight": 0, 00:16:48.707 "high_priority_weight": 0, 00:16:48.707 "nvme_adminq_poll_period_us": 10000, 00:16:48.707 "nvme_ioq_poll_period_us": 0, 00:16:48.707 "io_queue_requests": 0, 00:16:48.707 "delay_cmd_submit": true, 00:16:48.707 "transport_retry_count": 4, 00:16:48.707 "bdev_retry_count": 3, 00:16:48.707 "transport_ack_timeout": 0, 00:16:48.707 "ctrlr_loss_timeout_sec": 0, 00:16:48.707 "reconnect_delay_sec": 0, 00:16:48.707 "fast_io_fail_timeout_sec": 0, 00:16:48.707 "disable_auto_failback": false, 00:16:48.707 "generate_uuids": false, 00:16:48.707 "transport_tos": 0, 00:16:48.707 "nvme_error_stat": false, 00:16:48.707 "rdma_srq_size": 0, 00:16:48.707 "io_path_stat": false, 00:16:48.707 "allow_accel_sequence": false, 00:16:48.707 "rdma_max_cq_size": 0, 00:16:48.707 "rdma_cm_event_timeout_ms": 0, 00:16:48.707 "dhchap_digests": [ 00:16:48.707 "sha256", 00:16:48.707 "sha384", 00:16:48.707 "sha512" 00:16:48.707 ], 00:16:48.707 "dhchap_dhgroups": [ 00:16:48.707 "null", 00:16:48.707 "ffdhe2048", 00:16:48.707 "ffdhe3072", 00:16:48.707 "ffdhe4096", 00:16:48.707 "ffdhe6144", 00:16:48.707 "ffdhe8192" 00:16:48.707 ] 00:16:48.707 } 00:16:48.707 }, 00:16:48.707 { 00:16:48.707 "method": "bdev_nvme_set_hotplug", 00:16:48.707 "params": { 00:16:48.707 "period_us": 100000, 00:16:48.707 "enable": false 00:16:48.707 } 00:16:48.707 }, 00:16:48.707 { 00:16:48.707 "method": "bdev_malloc_create", 00:16:48.707 "params": { 00:16:48.707 "name": "malloc0", 00:16:48.707 "num_blocks": 8192, 00:16:48.707 "block_size": 4096, 00:16:48.707 "physical_block_size": 4096, 00:16:48.707 "uuid": "c0f8f55a-3031-476b-8be1-0431308383ad", 00:16:48.707 "optimal_io_boundary": 0, 00:16:48.707 "md_size": 0, 00:16:48.707 "dif_type": 0, 00:16:48.707 "dif_is_head_of_md": false, 00:16:48.707 "dif_pi_format": 0 00:16:48.707 } 00:16:48.707 }, 00:16:48.707 { 00:16:48.707 "method": "bdev_wait_for_examine" 00:16:48.707 } 00:16:48.707 ] 00:16:48.707 }, 00:16:48.707 { 00:16:48.707 "subsystem": "scsi", 00:16:48.707 "config": null 00:16:48.707 }, 00:16:48.707 { 00:16:48.707 "subsystem": "scheduler", 00:16:48.707 "config": [ 00:16:48.707 { 00:16:48.707 "method": "framework_set_scheduler", 00:16:48.707 "params": { 00:16:48.707 "name": "static" 00:16:48.707 } 00:16:48.707 } 00:16:48.707 ] 00:16:48.707 }, 00:16:48.707 { 00:16:48.707 "subsystem": "vhost_scsi", 00:16:48.707 "config": [] 00:16:48.707 }, 00:16:48.707 { 00:16:48.707 "subsystem": "vhost_blk", 00:16:48.707 
"config": [] 00:16:48.708 }, 00:16:48.708 { 00:16:48.708 "subsystem": "ublk", 00:16:48.708 "config": [ 00:16:48.708 { 00:16:48.708 "method": "ublk_create_target", 00:16:48.708 "params": { 00:16:48.708 "cpumask": "1" 00:16:48.708 } 00:16:48.708 }, 00:16:48.708 { 00:16:48.708 "method": "ublk_start_disk", 00:16:48.708 "params": { 00:16:48.708 "bdev_name": "malloc0", 00:16:48.708 "ublk_id": 0, 00:16:48.708 "num_queues": 1, 00:16:48.708 "queue_depth": 128 00:16:48.708 } 00:16:48.708 } 00:16:48.708 ] 00:16:48.708 }, 00:16:48.708 { 00:16:48.708 "subsystem": "nbd", 00:16:48.708 "config": [] 00:16:48.708 }, 00:16:48.708 { 00:16:48.708 "subsystem": "nvmf", 00:16:48.708 "config": [ 00:16:48.708 { 00:16:48.708 "method": "nvmf_set_config", 00:16:48.708 "params": { 00:16:48.708 "discovery_filter": "match_any", 00:16:48.708 "admin_cmd_passthru": { 00:16:48.708 "identify_ctrlr": false 00:16:48.708 }, 00:16:48.708 "dhchap_digests": [ 00:16:48.708 "sha256", 00:16:48.708 "sha384", 00:16:48.708 "sha512" 00:16:48.708 ], 00:16:48.708 "dhchap_dhgroups": [ 00:16:48.708 "null", 00:16:48.708 "ffdhe2048", 00:16:48.708 "ffdhe3072", 00:16:48.708 "ffdhe4096", 00:16:48.708 "ffdhe6144", 00:16:48.708 "ffdhe8192" 00:16:48.708 ] 00:16:48.708 } 00:16:48.708 }, 00:16:48.708 { 00:16:48.708 "method": "nvmf_set_max_subsystems", 00:16:48.708 "params": { 00:16:48.708 "max_subsystems": 1024 00:16:48.708 } 00:16:48.708 }, 00:16:48.708 { 00:16:48.708 "method": "nvmf_set_crdt", 00:16:48.708 "params": { 00:16:48.708 "crdt1": 0, 00:16:48.708 "crdt2": 0, 00:16:48.708 "crdt3": 0 00:16:48.708 } 00:16:48.708 } 00:16:48.708 ] 00:16:48.708 }, 00:16:48.708 { 00:16:48.708 "subsystem": "iscsi", 00:16:48.708 "config": [ 00:16:48.708 { 00:16:48.708 "method": "iscsi_set_options", 00:16:48.708 "params": { 00:16:48.708 "node_base": "iqn.2016-06.io.spdk", 00:16:48.708 "max_sessions": 128, 00:16:48.708 "max_connections_per_session": 2, 00:16:48.708 "max_queue_depth": 64, 00:16:48.708 "default_time2wait": 2, 00:16:48.708 "default_time2retain": 20, 00:16:48.708 "first_burst_length": 8192, 00:16:48.708 "immediate_data": true, 00:16:48.708 "allow_duplicated_isid": false, 00:16:48.708 "error_recovery_level": 0, 00:16:48.708 "nop_timeout": 60, 00:16:48.708 "nop_in_interval": 30, 00:16:48.708 "disable_chap": false, 00:16:48.708 "require_chap": false, 00:16:48.708 "mutual_chap": false, 00:16:48.708 "chap_group": 0, 00:16:48.708 "max_large_datain_per_connection": 64, 00:16:48.708 "max_r2t_per_connection": 4, 00:16:48.708 "pdu_pool_size": 36864, 00:16:48.708 "immediate_data_pool_size": 16384, 00:16:48.708 "data_out_pool_size": 2048 00:16:48.708 } 00:16:48.708 } 00:16:48.708 ] 00:16:48.708 } 00:16:48.708 ] 00:16:48.708 }' 00:16:48.708 03:01:19 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:48.708 03:01:19 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:48.708 [2024-12-05 03:01:19.336925] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:16:48.708 [2024-12-05 03:01:19.337422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73572 ] 00:16:48.708 [2024-12-05 03:01:19.497290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.969 [2024-12-05 03:01:19.620470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.914 [2024-12-05 03:01:20.590098] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:49.914 [2024-12-05 03:01:20.591123] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:49.914 [2024-12-05 03:01:20.598248] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:49.914 [2024-12-05 03:01:20.598349] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:49.914 [2024-12-05 03:01:20.598361] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:49.914 [2024-12-05 03:01:20.598370] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:49.914 [2024-12-05 03:01:20.606129] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:49.914 [2024-12-05 03:01:20.606163] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:49.914 [2024-12-05 03:01:20.614112] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:49.914 [2024-12-05 03:01:20.614246] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:49.914 [2024-12-05 03:01:20.631107] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:49.914 03:01:20 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:49.914 03:01:20 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:49.914 03:01:20 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:49.914 03:01:20 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:49.914 03:01:20 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.914 03:01:20 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:49.914 03:01:20 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.914 03:01:20 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:49.914 03:01:20 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:49.914 03:01:20 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73572 00:16:49.914 03:01:20 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73572 ']' 00:16:49.914 03:01:20 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73572 00:16:49.914 03:01:20 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:49.914 03:01:20 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:49.914 03:01:20 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73572 00:16:49.914 03:01:20 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:49.914 killing process with pid 73572 00:16:49.914 
03:01:20 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:49.914 03:01:20 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73572' 00:16:49.914 03:01:20 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73572 00:16:49.914 03:01:20 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73572 00:16:51.297 [2024-12-05 03:01:21.916097] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:51.298 [2024-12-05 03:01:21.945101] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:51.298 [2024-12-05 03:01:21.945207] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:51.298 [2024-12-05 03:01:21.951086] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:51.298 [2024-12-05 03:01:21.951138] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:51.298 [2024-12-05 03:01:21.951146] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:51.298 [2024-12-05 03:01:21.951167] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:51.298 [2024-12-05 03:01:21.951284] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:52.674 03:01:23 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:52.674 ************************************ 00:16:52.674 END TEST test_save_ublk_config 00:16:52.674 ************************************ 00:16:52.674 00:16:52.674 real 0m8.124s 00:16:52.674 user 0m5.607s 00:16:52.674 sys 0m3.099s 00:16:52.674 03:01:23 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:52.674 03:01:23 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:52.674 03:01:23 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73647 00:16:52.674 03:01:23 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:52.674 03:01:23 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73647 00:16:52.674 03:01:23 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:52.674 03:01:23 ublk -- common/autotest_common.sh@835 -- # '[' -z 73647 ']' 00:16:52.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.674 03:01:23 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.674 03:01:23 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:52.674 03:01:23 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.674 03:01:23 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:52.674 03:01:23 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:52.674 [2024-12-05 03:01:23.302122] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:16:52.674 [2024-12-05 03:01:23.302214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73647 ] 00:16:52.674 [2024-12-05 03:01:23.453975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:52.933 [2024-12-05 03:01:23.545956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.933 [2024-12-05 03:01:23.546061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.500 03:01:24 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:53.500 03:01:24 ublk -- common/autotest_common.sh@868 -- # return 0 00:16:53.500 03:01:24 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:53.500 03:01:24 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:53.500 03:01:24 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:53.500 03:01:24 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:53.500 ************************************ 00:16:53.500 START TEST test_create_ublk 00:16:53.500 ************************************ 00:16:53.500 03:01:24 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:16:53.500 03:01:24 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:53.500 03:01:24 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.500 03:01:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:53.500 [2024-12-05 03:01:24.160092] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:53.500 [2024-12-05 03:01:24.161831] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:53.500 03:01:24 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.500 03:01:24 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:53.500 03:01:24 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:53.500 03:01:24 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.500 03:01:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:53.500 03:01:24 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.500 03:01:24 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:53.500 03:01:24 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:53.500 03:01:24 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.500 03:01:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:53.500 [2024-12-05 03:01:24.342208] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:53.500 [2024-12-05 03:01:24.342547] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:53.500 [2024-12-05 03:01:24.342556] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:53.500 [2024-12-05 03:01:24.342562] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:53.759 [2024-12-05 03:01:24.350108] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:53.759 [2024-12-05 03:01:24.350124] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:53.759 
[2024-12-05 03:01:24.358100] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:53.759 [2024-12-05 03:01:24.358634] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:53.759 [2024-12-05 03:01:24.389108] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:53.759 03:01:24 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.759 03:01:24 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:53.759 03:01:24 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:53.759 03:01:24 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:53.759 03:01:24 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.759 03:01:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:53.759 03:01:24 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.759 03:01:24 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:53.759 { 00:16:53.759 "ublk_device": "/dev/ublkb0", 00:16:53.759 "id": 0, 00:16:53.759 "queue_depth": 512, 00:16:53.759 "num_queues": 4, 00:16:53.759 "bdev_name": "Malloc0" 00:16:53.759 } 00:16:53.759 ]' 00:16:53.760 03:01:24 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:53.760 03:01:24 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:53.760 03:01:24 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:53.760 03:01:24 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:53.760 03:01:24 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:53.760 03:01:24 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:53.760 03:01:24 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:53.760 03:01:24 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:53.760 03:01:24 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:53.760 03:01:24 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:53.760 03:01:24 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:53.760 03:01:24 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:53.760 03:01:24 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:53.760 03:01:24 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:53.760 03:01:24 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:53.760 03:01:24 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:53.760 03:01:24 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:53.760 03:01:24 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:53.760 03:01:24 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:53.760 03:01:24 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:53.760 03:01:24 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
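At this point test_create_ublk has completed the full bring-up (ublk_create_target, a 128 MiB malloc bdev, and ublk_start_disk Malloc0 0 -q 4 -d 512), with the kernel handshake visible above as ADD_DEV, SET_PARAMS, START_DEV; the fio template just assembled is executed on the next line. A condensed, standalone sketch of the same sequence driven through rpc.py (wrapping the calls in rpc.py is an assumption; the parameters are the ones shown in the log):

    SPDK=/home/vagrant/spdk_repo/spdk
    # Bring up the ublk target and expose a malloc bdev as /dev/ublkb0.
    $SPDK/scripts/rpc.py ublk_create_target
    $SPDK/scripts/rpc.py bdev_malloc_create -b Malloc0 128 4096
    $SPDK/scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512
    # Time-based pattern write with verification, as run_fio_test does below.
    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0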
00:16:53.760 03:01:24 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:54.018 fio: verification read phase will never start because write phase uses all of runtime 00:16:54.018 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:54.018 fio-3.35 00:16:54.018 Starting 1 process 00:17:03.980 00:17:03.980 fio_test: (groupid=0, jobs=1): err= 0: pid=73696: Thu Dec 5 03:01:34 2024 00:17:03.980 write: IOPS=14.4k, BW=56.4MiB/s (59.2MB/s)(564MiB/10001msec); 0 zone resets 00:17:03.980 clat (usec): min=41, max=4066, avg=68.46, stdev=92.23 00:17:03.980 lat (usec): min=41, max=4067, avg=68.90, stdev=92.24 00:17:03.980 clat percentiles (usec): 00:17:03.980 | 1.00th=[ 49], 5.00th=[ 55], 10.00th=[ 57], 20.00th=[ 60], 00:17:03.980 | 30.00th=[ 62], 40.00th=[ 64], 50.00th=[ 65], 60.00th=[ 67], 00:17:03.980 | 70.00th=[ 69], 80.00th=[ 70], 90.00th=[ 74], 95.00th=[ 78], 00:17:03.980 | 99.00th=[ 87], 99.50th=[ 93], 99.90th=[ 1811], 99.95th=[ 2802], 00:17:03.980 | 99.99th=[ 3556] 00:17:03.980 bw ( KiB/s): min=55392, max=62968, per=100.00%, avg=57848.00, stdev=1772.10, samples=19 00:17:03.980 iops : min=13848, max=15742, avg=14462.00, stdev=443.02, samples=19 00:17:03.980 lat (usec) : 50=2.59%, 100=97.03%, 250=0.18%, 500=0.03%, 750=0.01% 00:17:03.980 lat (usec) : 1000=0.02% 00:17:03.980 lat (msec) : 2=0.06%, 4=0.09%, 10=0.01% 00:17:03.980 cpu : usr=2.12%, sys=13.45%, ctx=144465, majf=0, minf=796 00:17:03.980 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:03.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.980 issued rwts: total=0,144463,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:03.980 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:03.980 00:17:03.980 Run status group 0 (all jobs): 00:17:03.980 WRITE: bw=56.4MiB/s (59.2MB/s), 56.4MiB/s-56.4MiB/s (59.2MB/s-59.2MB/s), io=564MiB (592MB), run=10001-10001msec 00:17:03.980 00:17:03.980 Disk stats (read/write): 00:17:03.980 ublkb0: ios=0/143018, merge=0/0, ticks=0/8193, in_queue=8193, util=99.09% 00:17:03.980 03:01:34 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:17:03.980 03:01:34 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.980 03:01:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.980 [2024-12-05 03:01:34.811779] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:04.238 [2024-12-05 03:01:34.841656] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:04.238 [2024-12-05 03:01:34.842556] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:04.238 [2024-12-05 03:01:34.849102] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:04.238 [2024-12-05 03:01:34.849341] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:04.238 [2024-12-05 03:01:34.849356] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:04.238 03:01:34 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.238 03:01:34 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 
0 00:17:04.238 03:01:34 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:17:04.238 03:01:34 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:17:04.238 03:01:34 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:04.238 03:01:34 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:04.238 03:01:34 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:04.238 03:01:34 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:04.238 03:01:34 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:17:04.238 03:01:34 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.238 03:01:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:04.238 [2024-12-05 03:01:34.865152] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:17:04.238 request: 00:17:04.238 { 00:17:04.238 "ublk_id": 0, 00:17:04.238 "method": "ublk_stop_disk", 00:17:04.238 "req_id": 1 00:17:04.238 } 00:17:04.238 Got JSON-RPC error response 00:17:04.238 response: 00:17:04.238 { 00:17:04.238 "code": -19, 00:17:04.238 "message": "No such device" 00:17:04.238 } 00:17:04.238 03:01:34 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:04.238 03:01:34 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:17:04.238 03:01:34 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:04.238 03:01:34 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:04.238 03:01:34 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:04.238 03:01:34 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:17:04.238 03:01:34 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.238 03:01:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:04.238 [2024-12-05 03:01:34.881148] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:04.238 [2024-12-05 03:01:34.889086] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:04.238 [2024-12-05 03:01:34.889117] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:04.238 03:01:34 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.238 03:01:34 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:04.238 03:01:34 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.238 03:01:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:04.504 03:01:35 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.504 03:01:35 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:17:04.504 03:01:35 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:04.504 03:01:35 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.504 03:01:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:04.504 03:01:35 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.504 03:01:35 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:04.504 03:01:35 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:17:04.504 03:01:35 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:04.504 03:01:35 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:04.504 03:01:35 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.504 03:01:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:04.504 03:01:35 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.504 03:01:35 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:04.504 03:01:35 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:17:04.769 ************************************ 00:17:04.769 END TEST test_create_ublk 00:17:04.769 ************************************ 00:17:04.769 03:01:35 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:04.769 00:17:04.769 real 0m11.208s 00:17:04.769 user 0m0.522s 00:17:04.769 sys 0m1.417s 00:17:04.769 03:01:35 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:04.769 03:01:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:04.769 03:01:35 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:17:04.769 03:01:35 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:04.769 03:01:35 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:04.769 03:01:35 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:04.769 ************************************ 00:17:04.769 START TEST test_create_multi_ublk 00:17:04.769 ************************************ 00:17:04.769 03:01:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:17:04.769 03:01:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:17:04.769 03:01:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.769 03:01:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:04.769 [2024-12-05 03:01:35.405088] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:04.769 [2024-12-05 03:01:35.406680] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:04.769 03:01:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.769 03:01:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:17:04.769 03:01:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:17:04.769 03:01:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:04.769 03:01:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:17:04.769 03:01:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.769 03:01:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.026 03:01:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.026 03:01:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:17:05.027 03:01:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:05.027 03:01:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.027 03:01:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.027 [2024-12-05 03:01:35.642211] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
00:17:05.027 [2024-12-05 03:01:35.642552] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:05.027 [2024-12-05 03:01:35.642565] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:05.027 [2024-12-05 03:01:35.642573] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:05.027 [2024-12-05 03:01:35.654125] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:05.027 [2024-12-05 03:01:35.654147] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:05.027 [2024-12-05 03:01:35.666096] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:05.027 [2024-12-05 03:01:35.666621] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:05.027 [2024-12-05 03:01:35.679214] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:05.027 03:01:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.027 03:01:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:17:05.027 03:01:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:05.027 03:01:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:17:05.027 03:01:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.027 03:01:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.284 03:01:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.284 03:01:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:17:05.284 03:01:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:17:05.284 03:01:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.284 03:01:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.284 [2024-12-05 03:01:35.909199] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:17:05.284 [2024-12-05 03:01:35.909525] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:17:05.284 [2024-12-05 03:01:35.909538] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:05.284 [2024-12-05 03:01:35.909543] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:05.284 [2024-12-05 03:01:35.918294] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:05.284 [2024-12-05 03:01:35.918311] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:05.284 [2024-12-05 03:01:35.925101] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:05.284 [2024-12-05 03:01:35.925625] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:05.284 [2024-12-05 03:01:35.934114] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:05.284 03:01:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.284 03:01:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:17:05.284 03:01:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:05.284 03:01:35 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:17:05.284 03:01:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.284 03:01:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.284 03:01:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.284 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:17:05.284 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:17:05.284 03:01:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.284 03:01:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.284 [2024-12-05 03:01:36.109183] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:17:05.284 [2024-12-05 03:01:36.109515] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:17:05.284 [2024-12-05 03:01:36.109527] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:17:05.284 [2024-12-05 03:01:36.109533] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:17:05.284 [2024-12-05 03:01:36.117108] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:05.284 [2024-12-05 03:01:36.117129] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:05.284 [2024-12-05 03:01:36.125095] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:05.284 [2024-12-05 03:01:36.125617] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:17:05.541 [2024-12-05 03:01:36.130942] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:17:05.541 03:01:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.541 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:17:05.541 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:05.541 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:17:05.541 03:01:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.541 03:01:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.541 03:01:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.541 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:17:05.541 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:17:05.541 03:01:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.541 03:01:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.541 [2024-12-05 03:01:36.305203] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:17:05.541 [2024-12-05 03:01:36.305529] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:17:05.541 [2024-12-05 03:01:36.305542] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:17:05.541 [2024-12-05 03:01:36.305547] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:17:05.541 [2024-12-05 
03:01:36.313105] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:05.541 [2024-12-05 03:01:36.313121] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:05.541 [2024-12-05 03:01:36.321100] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:05.541 [2024-12-05 03:01:36.321616] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:17:05.541 [2024-12-05 03:01:36.338102] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:17:05.541 03:01:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.541 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:17:05.541 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:17:05.541 03:01:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.541 03:01:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.541 03:01:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.541 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:17:05.541 { 00:17:05.541 "ublk_device": "/dev/ublkb0", 00:17:05.541 "id": 0, 00:17:05.541 "queue_depth": 512, 00:17:05.541 "num_queues": 4, 00:17:05.541 "bdev_name": "Malloc0" 00:17:05.541 }, 00:17:05.541 { 00:17:05.541 "ublk_device": "/dev/ublkb1", 00:17:05.541 "id": 1, 00:17:05.541 "queue_depth": 512, 00:17:05.541 "num_queues": 4, 00:17:05.541 "bdev_name": "Malloc1" 00:17:05.541 }, 00:17:05.541 { 00:17:05.541 "ublk_device": "/dev/ublkb2", 00:17:05.541 "id": 2, 00:17:05.541 "queue_depth": 512, 00:17:05.541 "num_queues": 4, 00:17:05.541 "bdev_name": "Malloc2" 00:17:05.541 }, 00:17:05.541 { 00:17:05.541 "ublk_device": "/dev/ublkb3", 00:17:05.541 "id": 3, 00:17:05.541 "queue_depth": 512, 00:17:05.541 "num_queues": 4, 00:17:05.541 "bdev_name": "Malloc3" 00:17:05.541 } 00:17:05.541 ]' 00:17:05.541 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:17:05.541 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:05.541 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:17:05.799 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:05.799 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:17:05.799 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:17:05.799 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:17:05.799 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:05.799 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:17:05.799 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:05.799 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:17:05.799 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:05.799 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:05.799 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:17:05.799 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
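The jq checks running here walk the ublk_get_disks output and confirm that each of the four devices created by test_create_multi_ublk maps back to its malloc bdev. Condensed into a standalone loop (rpc.py usage is assumed as above; the bound mirrors the test's seq 0 $MAX_DEV_ID with MAX_DEV_ID=3):

    SPDK=/home/vagrant/spdk_repo/spdk
    # Create four malloc-backed ublk disks, then list them for verification.
    $SPDK/scripts/rpc.py ublk_create_target
    for i in 0 1 2 3; do
        $SPDK/scripts/rpc.py bdev_malloc_create -b Malloc$i 128 4096
        $SPDK/scripts/rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512
    done
    $SPDK/scripts/rpc.py ublk_get_disks | jq -r '.[].ublk_device'
    # Expected: /dev/ublkb0 through /dev/ublkb3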
00:17:05.799 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:17:05.799 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:17:05.799 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:17:05.799 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:05.799 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:17:06.056 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:06.056 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:17:06.056 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:17:06.056 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:06.056 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:17:06.056 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:17:06.056 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:17:06.056 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:17:06.056 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:17:06.056 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:06.056 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:17:06.056 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:06.056 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:17:06.056 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:17:06.056 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:06.056 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:17:06.056 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:17:06.056 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:17:06.342 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:17:06.342 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:17:06.342 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:06.342 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:17:06.342 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:06.342 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:17:06.342 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:17:06.342 03:01:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:17:06.342 03:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:17:06.342 03:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:06.342 03:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:17:06.342 03:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.342 03:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:06.342 [2024-12-05 03:01:37.009167] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:06.342 [2024-12-05 03:01:37.041135] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:06.342 [2024-12-05 03:01:37.041935] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:06.342 [2024-12-05 03:01:37.049096] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:06.342 [2024-12-05 03:01:37.049331] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:06.342 [2024-12-05 03:01:37.049345] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:06.342 03:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.342 03:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:06.342 03:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:17:06.342 03:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.342 03:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:06.342 [2024-12-05 03:01:37.065169] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:06.342 [2024-12-05 03:01:37.105140] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:06.342 [2024-12-05 03:01:37.105884] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:06.342 [2024-12-05 03:01:37.113098] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:06.342 [2024-12-05 03:01:37.113335] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:06.342 [2024-12-05 03:01:37.113349] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:06.342 03:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.342 03:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:06.342 03:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:17:06.342 03:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.342 03:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:06.342 [2024-12-05 03:01:37.129160] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:17:06.342 [2024-12-05 03:01:37.161131] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:06.342 [2024-12-05 03:01:37.161833] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:17:06.342 [2024-12-05 03:01:37.170138] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:06.342 [2024-12-05 03:01:37.170375] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:17:06.342 [2024-12-05 03:01:37.170387] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:17:06.342 03:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.342 03:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:06.342 03:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:17:06.342 03:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.342 03:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
00:17:06.601 [2024-12-05 03:01:37.185159] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:17:06.601 [2024-12-05 03:01:37.217128] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:06.601 [2024-12-05 03:01:37.217750] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:17:06.601 [2024-12-05 03:01:37.226123] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:06.601 [2024-12-05 03:01:37.226349] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:17:06.601 [2024-12-05 03:01:37.226361] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:17:06.601 03:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.601 03:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:17:06.601 [2024-12-05 03:01:37.425138] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:06.601 [2024-12-05 03:01:37.433086] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:06.601 [2024-12-05 03:01:37.433112] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:06.908 03:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:17:06.908 03:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:06.908 03:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:06.908 03:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.908 03:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.191 03:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.191 03:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:07.191 03:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:07.191 03:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.191 03:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.449 03:01:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.449 03:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:07.449 03:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:17:07.449 03:01:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.449 03:01:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.708 03:01:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.708 03:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:07.708 03:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:17:07.708 03:01:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.708 03:01:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.967 03:01:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.967 03:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:17:07.967 03:01:38 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:17:07.967 03:01:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.967 03:01:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.967 03:01:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.967 03:01:38 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:07.967 03:01:38 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:17:07.967 03:01:38 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:07.967 03:01:38 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:07.967 03:01:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.967 03:01:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.967 03:01:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.967 03:01:38 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:07.967 03:01:38 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:17:07.967 ************************************ 00:17:07.967 END TEST test_create_multi_ublk 00:17:07.967 ************************************ 00:17:07.967 03:01:38 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:07.967 00:17:07.967 real 0m3.288s 00:17:07.967 user 0m0.838s 00:17:07.967 sys 0m0.134s 00:17:07.967 03:01:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:07.967 03:01:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.967 03:01:38 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:17:07.967 03:01:38 ublk -- ublk/ublk.sh@147 -- # cleanup 00:17:07.967 03:01:38 ublk -- ublk/ublk.sh@130 -- # killprocess 73647 00:17:07.967 03:01:38 ublk -- common/autotest_common.sh@954 -- # '[' -z 73647 ']' 00:17:07.967 03:01:38 ublk -- common/autotest_common.sh@958 -- # kill -0 73647 00:17:07.967 03:01:38 ublk -- common/autotest_common.sh@959 -- # uname 00:17:07.967 03:01:38 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:07.967 03:01:38 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73647 00:17:07.967 killing process with pid 73647 00:17:07.967 03:01:38 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:07.967 03:01:38 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:07.967 03:01:38 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73647' 00:17:07.967 03:01:38 ublk -- common/autotest_common.sh@973 -- # kill 73647 00:17:07.967 03:01:38 ublk -- common/autotest_common.sh@978 -- # wait 73647 00:17:08.534 [2024-12-05 03:01:39.302146] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:08.534 [2024-12-05 03:01:39.302363] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:09.473 00:17:09.473 real 0m25.123s 00:17:09.473 user 0m35.438s 00:17:09.473 sys 0m9.631s 00:17:09.473 03:01:40 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:09.473 ************************************ 00:17:09.473 03:01:40 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:09.473 END TEST ublk 00:17:09.473 ************************************ 00:17:09.473 03:01:40 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:09.473 03:01:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:17:09.473 03:01:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:09.473 03:01:40 -- common/autotest_common.sh@10 -- # set +x 00:17:09.473 ************************************ 00:17:09.473 START TEST ublk_recovery 00:17:09.473 ************************************ 00:17:09.473 03:01:40 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:09.473 * Looking for test storage... 00:17:09.473 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:09.473 03:01:40 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:09.473 03:01:40 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:09.473 03:01:40 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:17:09.473 03:01:40 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:09.473 03:01:40 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:09.473 03:01:40 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:09.473 03:01:40 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:09.473 03:01:40 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:17:09.473 03:01:40 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:17:09.473 03:01:40 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:17:09.473 03:01:40 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:17:09.473 03:01:40 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:17:09.473 03:01:40 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:17:09.473 03:01:40 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:17:09.473 03:01:40 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:09.473 03:01:40 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:17:09.473 03:01:40 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:17:09.473 03:01:40 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:09.473 03:01:40 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:09.473 03:01:40 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:17:09.473 03:01:40 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:17:09.473 03:01:40 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:09.473 03:01:40 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:17:09.473 03:01:40 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:17:09.473 03:01:40 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:17:09.473 03:01:40 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:17:09.473 03:01:40 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:09.473 03:01:40 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:17:09.473 03:01:40 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:17:09.473 03:01:40 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:09.473 03:01:40 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:09.473 03:01:40 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:17:09.473 03:01:40 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:09.473 03:01:40 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:09.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.473 --rc genhtml_branch_coverage=1 00:17:09.473 --rc genhtml_function_coverage=1 00:17:09.473 --rc genhtml_legend=1 00:17:09.473 --rc geninfo_all_blocks=1 00:17:09.473 --rc geninfo_unexecuted_blocks=1 00:17:09.473 00:17:09.473 ' 00:17:09.473 03:01:40 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:09.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.473 --rc genhtml_branch_coverage=1 00:17:09.473 --rc genhtml_function_coverage=1 00:17:09.473 --rc genhtml_legend=1 00:17:09.473 --rc geninfo_all_blocks=1 00:17:09.473 --rc geninfo_unexecuted_blocks=1 00:17:09.473 00:17:09.473 ' 00:17:09.473 03:01:40 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:09.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.473 --rc genhtml_branch_coverage=1 00:17:09.473 --rc genhtml_function_coverage=1 00:17:09.473 --rc genhtml_legend=1 00:17:09.473 --rc geninfo_all_blocks=1 00:17:09.473 --rc geninfo_unexecuted_blocks=1 00:17:09.473 00:17:09.473 ' 00:17:09.473 03:01:40 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:09.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.473 --rc genhtml_branch_coverage=1 00:17:09.473 --rc genhtml_function_coverage=1 00:17:09.473 --rc genhtml_legend=1 00:17:09.473 --rc geninfo_all_blocks=1 00:17:09.473 --rc geninfo_unexecuted_blocks=1 00:17:09.473 00:17:09.473 ' 00:17:09.473 03:01:40 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:09.473 03:01:40 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:09.473 03:01:40 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:09.473 03:01:40 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:09.473 03:01:40 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:09.473 03:01:40 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:09.473 03:01:40 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:09.473 03:01:40 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:09.473 03:01:40 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:17:09.473 03:01:40 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:17:09.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.473 03:01:40 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=74048 00:17:09.473 03:01:40 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:09.473 03:01:40 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 74048 00:17:09.473 03:01:40 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74048 ']' 00:17:09.473 03:01:40 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.473 03:01:40 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:09.473 03:01:40 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.473 03:01:40 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:09.473 03:01:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:09.473 03:01:40 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:09.473 [2024-12-05 03:01:40.295927] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:17:09.473 [2024-12-05 03:01:40.296486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74048 ] 00:17:09.732 [2024-12-05 03:01:40.451127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:09.732 [2024-12-05 03:01:40.543191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.733 [2024-12-05 03:01:40.543270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.299 03:01:41 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:10.299 03:01:41 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:10.299 03:01:41 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:17:10.299 03:01:41 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.299 03:01:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:10.299 [2024-12-05 03:01:41.133090] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:10.299 [2024-12-05 03:01:41.134765] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:10.299 03:01:41 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.299 03:01:41 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:10.299 03:01:41 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.299 03:01:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:10.558 malloc0 00:17:10.558 03:01:41 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.558 03:01:41 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:17:10.558 03:01:41 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.558 03:01:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:10.558 [2024-12-05 03:01:41.221271] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:17:10.558 [2024-12-05 03:01:41.221356] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:17:10.558 [2024-12-05 03:01:41.221365] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:10.558 [2024-12-05 03:01:41.221372] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:10.558 [2024-12-05 03:01:41.230194] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:10.558 [2024-12-05 03:01:41.230212] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:10.558 [2024-12-05 03:01:41.237097] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:10.558 [2024-12-05 03:01:41.237224] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:10.558 [2024-12-05 03:01:41.252106] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:10.558 1 00:17:10.558 03:01:41 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.558 03:01:41 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:17:11.494 03:01:42 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74079 00:17:11.494 03:01:42 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:17:11.494 03:01:42 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:17:11.753 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:11.753 fio-3.35 00:17:11.753 Starting 1 process 00:17:17.024 03:01:47 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 74048 00:17:17.024 03:01:47 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:17:22.309 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 74048 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:17:22.309 03:01:52 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:22.309 03:01:52 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74191 00:17:22.309 03:01:52 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:22.309 03:01:52 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74191 00:17:22.309 03:01:52 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74191 ']' 00:17:22.309 03:01:52 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.309 03:01:52 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:22.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.309 03:01:52 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.309 03:01:52 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:22.309 03:01:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:22.309 [2024-12-05 03:01:52.357685] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
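Up to this point the recovery test has only built the failure scenario: a target is brought up, a malloc bdev is exported through ublk, fio is started against the resulting block device, and the target is then killed with SIGKILL while I/O is still in flight. A condensed sketch of that setup half, using only commands that appear in the trace above (rpc_cmd wraps scripts/rpc.py; spdk_pid and fio_proc are the pid variables the script records, 74048 and 74079 in this run; waitforlisten and error handling are omitted):

  "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &
  spdk_pid=$!                                    # 74048 in this run
  rpc_cmd ublk_create_target
  rpc_cmd bdev_malloc_create -b malloc0 64 4096
  rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128  # exposes /dev/ublkb1
  sleep 1
  taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
      --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
      --time_based --runtime=60 &
  fio_proc=$!                                    # 74079 in this run
  sleep 5
  kill -9 "$spdk_pid"                            # simulate a crash mid-I/O

The second target started right after this is what the recovery path below exercises.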
00:17:22.309 [2024-12-05 03:01:52.357821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74191 ] 00:17:22.309 [2024-12-05 03:01:52.523491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:22.309 [2024-12-05 03:01:52.640771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.309 [2024-12-05 03:01:52.640839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.572 03:01:53 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:22.572 03:01:53 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:22.572 03:01:53 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:17:22.572 03:01:53 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.572 03:01:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:22.572 [2024-12-05 03:01:53.277102] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:22.572 [2024-12-05 03:01:53.279119] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:22.572 03:01:53 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.572 03:01:53 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:22.572 03:01:53 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.572 03:01:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:22.572 malloc0 00:17:22.572 03:01:53 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.572 03:01:53 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:17:22.572 03:01:53 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.572 03:01:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:22.572 [2024-12-05 03:01:53.390228] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:17:22.572 [2024-12-05 03:01:53.390269] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:22.572 [2024-12-05 03:01:53.390281] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:22.572 1 00:17:22.572 [2024-12-05 03:01:53.397134] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:22.572 [2024-12-05 03:01:53.397158] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:22.572 03:01:53 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.572 03:01:53 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74079 00:17:23.957 [2024-12-05 03:01:54.397194] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:23.957 [2024-12-05 03:01:54.407104] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:23.957 [2024-12-05 03:01:54.407126] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:24.891 [2024-12-05 03:01:55.410098] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:24.892 [2024-12-05 03:01:55.416082] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:24.892 [2024-12-05 03:01:55.416099] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:17:25.859 [2024-12-05 03:01:56.416119] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:25.859 [2024-12-05 03:01:56.420091] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:25.859 [2024-12-05 03:01:56.420104] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:25.859 [2024-12-05 03:01:56.420112] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:17:25.859 [2024-12-05 03:01:56.420182] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:47.783 [2024-12-05 03:02:17.591104] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:47.783 [2024-12-05 03:02:17.594492] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:47.783 [2024-12-05 03:02:17.600344] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:47.783 [2024-12-05 03:02:17.600363] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:18:14.329 00:18:14.329 fio_test: (groupid=0, jobs=1): err= 0: pid=74086: Thu Dec 5 03:02:42 2024 00:18:14.329 read: IOPS=13.4k, BW=52.2MiB/s (54.7MB/s)(3130MiB/60002msec) 00:18:14.329 slat (nsec): min=1133, max=199291, avg=5576.72, stdev=1549.69 00:18:14.329 clat (usec): min=674, max=30344k, avg=4933.63, stdev=281558.42 00:18:14.329 lat (usec): min=680, max=30344k, avg=4939.21, stdev=281558.42 00:18:14.329 clat percentiles (usec): 00:18:14.329 | 1.00th=[ 1893], 5.00th=[ 2040], 10.00th=[ 2073], 20.00th=[ 2089], 00:18:14.329 | 30.00th=[ 2114], 40.00th=[ 2114], 50.00th=[ 2147], 60.00th=[ 2147], 00:18:14.329 | 70.00th=[ 2180], 80.00th=[ 2212], 90.00th=[ 2278], 95.00th=[ 3621], 00:18:14.329 | 99.00th=[ 5800], 99.50th=[ 6128], 99.90th=[ 8291], 99.95th=[12256], 00:18:14.329 | 99.99th=[13304] 00:18:14.329 bw ( KiB/s): min=41296, max=114672, per=100.00%, avg=106866.03, stdev=16506.59, samples=59 00:18:14.329 iops : min=10324, max=28668, avg=26716.51, stdev=4126.65, samples=59 00:18:14.329 write: IOPS=13.3k, BW=52.1MiB/s (54.6MB/s)(3125MiB/60002msec); 0 zone resets 00:18:14.329 slat (nsec): min=1300, max=264769, avg=5823.64, stdev=1538.22 00:18:14.329 clat (usec): min=645, max=30344k, avg=4647.43, stdev=260564.22 00:18:14.329 lat (usec): min=651, max=30344k, avg=4653.26, stdev=260564.21 00:18:14.329 clat percentiles (usec): 00:18:14.329 | 1.00th=[ 1942], 5.00th=[ 2147], 10.00th=[ 2180], 20.00th=[ 2212], 00:18:14.329 | 30.00th=[ 2212], 40.00th=[ 2245], 50.00th=[ 2245], 60.00th=[ 2278], 00:18:14.329 | 70.00th=[ 2278], 80.00th=[ 2311], 90.00th=[ 2376], 95.00th=[ 3589], 00:18:14.329 | 99.00th=[ 5866], 99.50th=[ 6194], 99.90th=[ 8455], 99.95th=[12387], 00:18:14.329 | 99.99th=[13173] 00:18:14.329 bw ( KiB/s): min=41968, max=114192, per=100.00%, avg=106693.29, stdev=16269.66, samples=59 00:18:14.329 iops : min=10492, max=28548, avg=26673.32, stdev=4067.41, samples=59 00:18:14.329 lat (usec) : 750=0.01%, 1000=0.01% 00:18:14.329 lat (msec) : 2=1.89%, 4=93.97%, 10=4.07%, 20=0.05%, >=2000=0.01% 00:18:14.329 cpu : usr=2.91%, sys=15.49%, ctx=52573, majf=0, minf=13 00:18:14.329 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:18:14.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.329 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:14.329 
issued rwts: total=801171,799913,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.329 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:14.329 00:18:14.329 Run status group 0 (all jobs): 00:18:14.329 READ: bw=52.2MiB/s (54.7MB/s), 52.2MiB/s-52.2MiB/s (54.7MB/s-54.7MB/s), io=3130MiB (3282MB), run=60002-60002msec 00:18:14.329 WRITE: bw=52.1MiB/s (54.6MB/s), 52.1MiB/s-52.1MiB/s (54.6MB/s-54.6MB/s), io=3125MiB (3276MB), run=60002-60002msec 00:18:14.329 00:18:14.329 Disk stats (read/write): 00:18:14.329 ublkb1: ios=798146/796817, merge=0/0, ticks=3902425/3596947, in_queue=7499373, util=99.88% 00:18:14.329 03:02:42 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:18:14.329 03:02:42 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.329 03:02:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:14.329 [2024-12-05 03:02:42.523833] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:14.329 [2024-12-05 03:02:42.555212] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:14.329 [2024-12-05 03:02:42.555378] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:14.329 [2024-12-05 03:02:42.566098] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:14.329 [2024-12-05 03:02:42.566204] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:14.329 [2024-12-05 03:02:42.566211] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:14.329 03:02:42 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.329 03:02:42 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:18:14.329 03:02:42 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.329 03:02:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:14.329 [2024-12-05 03:02:42.582176] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:14.329 [2024-12-05 03:02:42.587098] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:14.329 [2024-12-05 03:02:42.587130] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:14.329 03:02:42 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.329 03:02:42 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:18:14.329 03:02:42 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:18:14.329 03:02:42 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74191 00:18:14.329 03:02:42 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74191 ']' 00:18:14.329 03:02:42 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74191 00:18:14.329 03:02:42 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:18:14.329 03:02:42 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.329 03:02:42 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74191 00:18:14.329 03:02:42 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:14.329 killing process with pid 74191 00:18:14.329 03:02:42 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:14.329 03:02:42 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74191' 00:18:14.329 03:02:42 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74191 00:18:14.329 03:02:42 ublk_recovery -- common/autotest_common.sh@978 -- # wait 74191 
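The second half shown above is the actual recovery path: a fresh target is started while the kernel still holds ublk device 1, the same malloc bdev is recreated, and the device is re-attached with ublk_recover_disk instead of ublk_start_disk. The library then polls UBLK_CMD_GET_DEV_INFO until the device can be taken over and finishes with START_USER_RECOVERY and END_USER_RECOVERY, after which the fio job started before the crash completes cleanly (err= 0 in the summary). A minimal sketch of the script side, again restricted to calls visible in the trace (the GET_DEV_INFO retry loop lives inside the ublk library, not in the shell script):

  "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &      # second target, pid 74191 here
  rpc_cmd ublk_create_target
  rpc_cmd bdev_malloc_create -b malloc0 64 4096  # same bdev name as before the crash
  rpc_cmd ublk_recover_disk malloc0 1            # re-attach the existing /dev/ublkb1
  wait "$fio_proc"                               # fio finishes against the recovered disk
  rpc_cmd ublk_stop_disk 1
  rpc_cmd ublk_destroy_target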
00:18:14.329 [2024-12-05 03:02:43.784722] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:14.329 [2024-12-05 03:02:43.784786] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:14.329 00:18:14.329 real 1m4.585s 00:18:14.329 user 1m45.777s 00:18:14.329 sys 0m23.637s 00:18:14.329 03:02:44 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:14.329 ************************************ 00:18:14.329 END TEST ublk_recovery 00:18:14.329 ************************************ 00:18:14.329 03:02:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:14.329 03:02:44 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:18:14.329 03:02:44 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:18:14.329 03:02:44 -- spdk/autotest.sh@260 -- # timing_exit lib 00:18:14.329 03:02:44 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:14.329 03:02:44 -- common/autotest_common.sh@10 -- # set +x 00:18:14.329 03:02:44 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:18:14.329 03:02:44 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:18:14.329 03:02:44 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:18:14.329 03:02:44 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:18:14.329 03:02:44 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:18:14.329 03:02:44 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:18:14.329 03:02:44 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:18:14.329 03:02:44 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:18:14.329 03:02:44 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:18:14.329 03:02:44 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:18:14.329 03:02:44 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:14.329 03:02:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:14.329 03:02:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:14.329 03:02:44 -- common/autotest_common.sh@10 -- # set +x 00:18:14.329 ************************************ 00:18:14.329 START TEST ftl 00:18:14.329 ************************************ 00:18:14.329 03:02:44 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:14.329 * Looking for test storage... 
00:18:14.329 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:14.329 03:02:44 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:14.329 03:02:44 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:18:14.329 03:02:44 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:14.329 03:02:44 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:14.329 03:02:44 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:14.329 03:02:44 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:14.329 03:02:44 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:14.329 03:02:44 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:18:14.329 03:02:44 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:18:14.329 03:02:44 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:18:14.329 03:02:44 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:18:14.329 03:02:44 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:18:14.329 03:02:44 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:18:14.329 03:02:44 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:18:14.329 03:02:44 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:14.329 03:02:44 ftl -- scripts/common.sh@344 -- # case "$op" in 00:18:14.329 03:02:44 ftl -- scripts/common.sh@345 -- # : 1 00:18:14.329 03:02:44 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:14.330 03:02:44 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:14.330 03:02:44 ftl -- scripts/common.sh@365 -- # decimal 1 00:18:14.330 03:02:44 ftl -- scripts/common.sh@353 -- # local d=1 00:18:14.330 03:02:44 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:14.330 03:02:44 ftl -- scripts/common.sh@355 -- # echo 1 00:18:14.330 03:02:44 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:18:14.330 03:02:44 ftl -- scripts/common.sh@366 -- # decimal 2 00:18:14.330 03:02:44 ftl -- scripts/common.sh@353 -- # local d=2 00:18:14.330 03:02:44 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:14.330 03:02:44 ftl -- scripts/common.sh@355 -- # echo 2 00:18:14.330 03:02:44 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:18:14.330 03:02:44 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:14.330 03:02:44 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:14.330 03:02:44 ftl -- scripts/common.sh@368 -- # return 0 00:18:14.330 03:02:44 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:14.330 03:02:44 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:14.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.330 --rc genhtml_branch_coverage=1 00:18:14.330 --rc genhtml_function_coverage=1 00:18:14.330 --rc genhtml_legend=1 00:18:14.330 --rc geninfo_all_blocks=1 00:18:14.330 --rc geninfo_unexecuted_blocks=1 00:18:14.330 00:18:14.330 ' 00:18:14.330 03:02:44 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:14.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.330 --rc genhtml_branch_coverage=1 00:18:14.330 --rc genhtml_function_coverage=1 00:18:14.330 --rc genhtml_legend=1 00:18:14.330 --rc geninfo_all_blocks=1 00:18:14.330 --rc geninfo_unexecuted_blocks=1 00:18:14.330 00:18:14.330 ' 00:18:14.330 03:02:44 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:14.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.330 --rc genhtml_branch_coverage=1 00:18:14.330 --rc genhtml_function_coverage=1 00:18:14.330 --rc 
genhtml_legend=1 00:18:14.330 --rc geninfo_all_blocks=1 00:18:14.330 --rc geninfo_unexecuted_blocks=1 00:18:14.330 00:18:14.330 ' 00:18:14.330 03:02:44 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:14.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.330 --rc genhtml_branch_coverage=1 00:18:14.330 --rc genhtml_function_coverage=1 00:18:14.330 --rc genhtml_legend=1 00:18:14.330 --rc geninfo_all_blocks=1 00:18:14.330 --rc geninfo_unexecuted_blocks=1 00:18:14.330 00:18:14.330 ' 00:18:14.330 03:02:44 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:14.330 03:02:44 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:14.330 03:02:44 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:14.330 03:02:44 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:14.330 03:02:44 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:14.330 03:02:44 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:14.330 03:02:44 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:14.330 03:02:44 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:14.330 03:02:44 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:14.330 03:02:44 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:14.330 03:02:44 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:14.330 03:02:44 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:14.330 03:02:44 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:14.330 03:02:44 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:14.330 03:02:44 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:14.330 03:02:44 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:14.330 03:02:44 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:14.330 03:02:44 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:14.330 03:02:44 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:14.330 03:02:44 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:14.330 03:02:44 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:14.330 03:02:44 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:14.330 03:02:44 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:14.330 03:02:44 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:14.330 03:02:44 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:14.330 03:02:44 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:14.330 03:02:44 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:14.330 03:02:44 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:14.330 03:02:44 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:14.330 03:02:44 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:14.330 03:02:44 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:18:14.330 03:02:44 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:18:14.330 03:02:44 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:18:14.330 03:02:44 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:18:14.330 03:02:44 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:14.589 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:14.589 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:14.589 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:14.589 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:14.589 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:14.589 03:02:45 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=75001 00:18:14.589 03:02:45 ftl -- ftl/ftl.sh@38 -- # waitforlisten 75001 00:18:14.589 03:02:45 ftl -- common/autotest_common.sh@835 -- # '[' -z 75001 ']' 00:18:14.589 03:02:45 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.589 03:02:45 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.589 03:02:45 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:18:14.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.589 03:02:45 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.589 03:02:45 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.589 03:02:45 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:14.847 [2024-12-05 03:02:45.512124] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:18:14.847 [2024-12-05 03:02:45.512240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75001 ] 00:18:14.847 [2024-12-05 03:02:45.667745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.105 [2024-12-05 03:02:45.755031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.673 03:02:46 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:15.673 03:02:46 ftl -- common/autotest_common.sh@868 -- # return 0 00:18:15.673 03:02:46 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:18:15.673 03:02:46 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:16.609 03:02:47 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:18:16.609 03:02:47 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:16.867 03:02:47 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:18:16.867 03:02:47 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:16.867 03:02:47 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:17.125 03:02:47 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:18:17.125 03:02:47 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:18:17.125 03:02:47 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:18:17.125 03:02:47 ftl -- ftl/ftl.sh@50 -- # break 00:18:17.125 03:02:47 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:18:17.125 03:02:47 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:18:17.125 03:02:47 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:17.125 03:02:47 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:17.383 03:02:48 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:18:17.383 03:02:48 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:18:17.383 03:02:48 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:18:17.383 03:02:48 ftl -- ftl/ftl.sh@63 -- # break 00:18:17.383 03:02:48 ftl -- ftl/ftl.sh@66 -- # killprocess 75001 00:18:17.383 03:02:48 ftl -- common/autotest_common.sh@954 -- # '[' -z 75001 ']' 00:18:17.383 03:02:48 ftl -- common/autotest_common.sh@958 -- # kill -0 75001 00:18:17.383 03:02:48 ftl -- common/autotest_common.sh@959 -- # uname 00:18:17.383 03:02:48 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:17.383 03:02:48 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75001 00:18:17.383 03:02:48 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:17.383 killing process with pid 75001 00:18:17.383 03:02:48 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:17.383 03:02:48 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75001' 00:18:17.383 03:02:48 ftl -- common/autotest_common.sh@973 -- # kill 75001 00:18:17.383 03:02:48 ftl -- common/autotest_common.sh@978 -- # wait 75001 00:18:18.763 03:02:49 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:18:18.763 03:02:49 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:18.763 03:02:49 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:18.763 03:02:49 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:18.763 03:02:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:18.763 ************************************ 00:18:18.763 START TEST ftl_fio_basic 00:18:18.763 ************************************ 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:18.763 * Looking for test storage... 
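The device probing just above is how ftl.sh decides which namespace backs the FTL write cache and which backs the main data store: it dumps all bdevs and filters them with jq, taking a non-zoned namespace with 64-byte metadata and at least 1310720 blocks as the cache, and any other sufficiently large non-zoned namespace as the base device. The two queries, with the filters copied from the trace (here 0000:00:10.0 becomes nv_cache and 0000:00:11.0 the base disk; rpc.py is the repo's scripts/rpc.py):

  # candidate cache devices: 64-byte metadata, not zoned, >= 1310720 blocks
  scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
  # candidate base devices: anything else big enough that is not the cache device
  scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'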
00:18:18.763 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:18.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.763 --rc genhtml_branch_coverage=1 00:18:18.763 --rc genhtml_function_coverage=1 00:18:18.763 --rc genhtml_legend=1 00:18:18.763 --rc geninfo_all_blocks=1 00:18:18.763 --rc geninfo_unexecuted_blocks=1 00:18:18.763 00:18:18.763 ' 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:18.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.763 --rc 
genhtml_branch_coverage=1 00:18:18.763 --rc genhtml_function_coverage=1 00:18:18.763 --rc genhtml_legend=1 00:18:18.763 --rc geninfo_all_blocks=1 00:18:18.763 --rc geninfo_unexecuted_blocks=1 00:18:18.763 00:18:18.763 ' 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:18.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.763 --rc genhtml_branch_coverage=1 00:18:18.763 --rc genhtml_function_coverage=1 00:18:18.763 --rc genhtml_legend=1 00:18:18.763 --rc geninfo_all_blocks=1 00:18:18.763 --rc geninfo_unexecuted_blocks=1 00:18:18.763 00:18:18.763 ' 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:18.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.763 --rc genhtml_branch_coverage=1 00:18:18.763 --rc genhtml_function_coverage=1 00:18:18.763 --rc genhtml_legend=1 00:18:18.763 --rc geninfo_all_blocks=1 00:18:18.763 --rc geninfo_unexecuted_blocks=1 00:18:18.763 00:18:18.763 ' 00:18:18.763 03:02:49 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:18.764 
03:02:49 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75128 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75128 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75128 ']' 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
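By this point ftl_fio_basic has only defined its job suites (the 'basic' suite is randw-verify, randw-verify-j2 and randw-verify-depth128), fixed the devices handed over by ftl.sh, exported FTL_BDEV_NAME=ftl0 and FTL_JSON_CONF, and started its own application instance. The launch follows the pattern used throughout these tests; roughly, with the helper names as they appear in the trace (waitforlisten blocks until the RPC socket /var/tmp/spdk.sock answers):

  trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 &    # reactors on cores 0-2
  svcpid=$!                                                  # 75128 in this run
  waitforlisten "$svcpid"                                    # returns once /var/tmp/spdk.sock accepts RPCs

Everything after this point assembles the ftl0 bdev that those fio jobs will run against.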
00:18:18.764 03:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.764 03:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:18.764 [2024-12-05 03:02:49.570342] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:18:18.764 [2024-12-05 03:02:49.570572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75128 ] 00:18:19.024 [2024-12-05 03:02:49.723041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:19.024 [2024-12-05 03:02:49.817717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.024 [2024-12-05 03:02:49.817952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.024 [2024-12-05 03:02:49.817962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.655 03:02:50 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.655 03:02:50 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:18:19.655 03:02:50 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:19.655 03:02:50 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:18:19.655 03:02:50 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:19.655 03:02:50 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:18:19.655 03:02:50 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:18:19.655 03:02:50 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:19.935 03:02:50 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:19.935 03:02:50 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:18:19.935 03:02:50 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:19.935 03:02:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:19.935 03:02:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:19.935 03:02:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:19.935 03:02:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:19.935 03:02:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:20.194 03:02:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:20.194 { 00:18:20.194 "name": "nvme0n1", 00:18:20.194 "aliases": [ 00:18:20.194 "2ae92765-3b37-445b-9a6c-208f52356777" 00:18:20.194 ], 00:18:20.194 "product_name": "NVMe disk", 00:18:20.194 "block_size": 4096, 00:18:20.194 "num_blocks": 1310720, 00:18:20.194 "uuid": "2ae92765-3b37-445b-9a6c-208f52356777", 00:18:20.194 "numa_id": -1, 00:18:20.194 "assigned_rate_limits": { 00:18:20.194 "rw_ios_per_sec": 0, 00:18:20.194 "rw_mbytes_per_sec": 0, 00:18:20.194 "r_mbytes_per_sec": 0, 00:18:20.194 "w_mbytes_per_sec": 0 00:18:20.194 }, 00:18:20.194 "claimed": false, 00:18:20.194 "zoned": false, 00:18:20.194 "supported_io_types": { 00:18:20.194 "read": true, 00:18:20.194 "write": true, 00:18:20.194 "unmap": true, 00:18:20.194 "flush": true, 00:18:20.194 "reset": true, 00:18:20.194 "nvme_admin": true, 00:18:20.194 "nvme_io": true, 00:18:20.194 "nvme_io_md": 
false, 00:18:20.194 "write_zeroes": true, 00:18:20.194 "zcopy": false, 00:18:20.194 "get_zone_info": false, 00:18:20.194 "zone_management": false, 00:18:20.194 "zone_append": false, 00:18:20.194 "compare": true, 00:18:20.194 "compare_and_write": false, 00:18:20.194 "abort": true, 00:18:20.194 "seek_hole": false, 00:18:20.194 "seek_data": false, 00:18:20.194 "copy": true, 00:18:20.194 "nvme_iov_md": false 00:18:20.194 }, 00:18:20.194 "driver_specific": { 00:18:20.194 "nvme": [ 00:18:20.194 { 00:18:20.194 "pci_address": "0000:00:11.0", 00:18:20.194 "trid": { 00:18:20.194 "trtype": "PCIe", 00:18:20.194 "traddr": "0000:00:11.0" 00:18:20.194 }, 00:18:20.194 "ctrlr_data": { 00:18:20.194 "cntlid": 0, 00:18:20.194 "vendor_id": "0x1b36", 00:18:20.194 "model_number": "QEMU NVMe Ctrl", 00:18:20.194 "serial_number": "12341", 00:18:20.194 "firmware_revision": "8.0.0", 00:18:20.194 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:20.194 "oacs": { 00:18:20.194 "security": 0, 00:18:20.194 "format": 1, 00:18:20.194 "firmware": 0, 00:18:20.194 "ns_manage": 1 00:18:20.194 }, 00:18:20.194 "multi_ctrlr": false, 00:18:20.194 "ana_reporting": false 00:18:20.194 }, 00:18:20.194 "vs": { 00:18:20.194 "nvme_version": "1.4" 00:18:20.194 }, 00:18:20.194 "ns_data": { 00:18:20.194 "id": 1, 00:18:20.194 "can_share": false 00:18:20.194 } 00:18:20.194 } 00:18:20.194 ], 00:18:20.194 "mp_policy": "active_passive" 00:18:20.194 } 00:18:20.194 } 00:18:20.194 ]' 00:18:20.194 03:02:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:20.194 03:02:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:20.194 03:02:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:20.194 03:02:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:20.194 03:02:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:20.194 03:02:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:18:20.194 03:02:50 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:18:20.194 03:02:50 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:20.194 03:02:50 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:18:20.194 03:02:50 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:20.194 03:02:50 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:20.453 03:02:51 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:18:20.453 03:02:51 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:20.711 03:02:51 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=fe5a3a91-5133-4603-bdd2-df82d9c0f1f3 00:18:20.711 03:02:51 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u fe5a3a91-5133-4603-bdd2-df82d9c0f1f3 00:18:20.711 03:02:51 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=712bc737-0869-4a26-8f80-798d1cb2c500 00:18:20.711 03:02:51 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 712bc737-0869-4a26-8f80-798d1cb2c500 00:18:20.711 03:02:51 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:18:20.711 03:02:51 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:20.711 03:02:51 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=712bc737-0869-4a26-8f80-798d1cb2c500 00:18:20.711 03:02:51 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:18:20.712 03:02:51 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 712bc737-0869-4a26-8f80-798d1cb2c500 00:18:20.712 03:02:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=712bc737-0869-4a26-8f80-798d1cb2c500 00:18:20.712 03:02:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:20.712 03:02:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:20.712 03:02:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:20.712 03:02:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 712bc737-0869-4a26-8f80-798d1cb2c500 00:18:20.970 03:02:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:20.970 { 00:18:20.970 "name": "712bc737-0869-4a26-8f80-798d1cb2c500", 00:18:20.970 "aliases": [ 00:18:20.970 "lvs/nvme0n1p0" 00:18:20.970 ], 00:18:20.970 "product_name": "Logical Volume", 00:18:20.970 "block_size": 4096, 00:18:20.970 "num_blocks": 26476544, 00:18:20.970 "uuid": "712bc737-0869-4a26-8f80-798d1cb2c500", 00:18:20.970 "assigned_rate_limits": { 00:18:20.970 "rw_ios_per_sec": 0, 00:18:20.970 "rw_mbytes_per_sec": 0, 00:18:20.970 "r_mbytes_per_sec": 0, 00:18:20.970 "w_mbytes_per_sec": 0 00:18:20.970 }, 00:18:20.970 "claimed": false, 00:18:20.970 "zoned": false, 00:18:20.970 "supported_io_types": { 00:18:20.970 "read": true, 00:18:20.970 "write": true, 00:18:20.970 "unmap": true, 00:18:20.970 "flush": false, 00:18:20.970 "reset": true, 00:18:20.970 "nvme_admin": false, 00:18:20.970 "nvme_io": false, 00:18:20.970 "nvme_io_md": false, 00:18:20.970 "write_zeroes": true, 00:18:20.970 "zcopy": false, 00:18:20.970 "get_zone_info": false, 00:18:20.970 "zone_management": false, 00:18:20.970 "zone_append": false, 00:18:20.970 "compare": false, 00:18:20.970 "compare_and_write": false, 00:18:20.970 "abort": false, 00:18:20.970 "seek_hole": true, 00:18:20.970 "seek_data": true, 00:18:20.970 "copy": false, 00:18:20.970 "nvme_iov_md": false 00:18:20.970 }, 00:18:20.970 "driver_specific": { 00:18:20.970 "lvol": { 00:18:20.970 "lvol_store_uuid": "fe5a3a91-5133-4603-bdd2-df82d9c0f1f3", 00:18:20.970 "base_bdev": "nvme0n1", 00:18:20.970 "thin_provision": true, 00:18:20.970 "num_allocated_clusters": 0, 00:18:20.970 "snapshot": false, 00:18:20.970 "clone": false, 00:18:20.970 "esnap_clone": false 00:18:20.970 } 00:18:20.970 } 00:18:20.970 } 00:18:20.970 ]' 00:18:20.970 03:02:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:20.970 03:02:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:20.970 03:02:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:20.970 03:02:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:20.970 03:02:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:20.970 03:02:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:20.970 03:02:51 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:18:20.970 03:02:51 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:18:20.970 03:02:51 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:21.230 03:02:52 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:21.230 03:02:52 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:18:21.230 03:02:52 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 712bc737-0869-4a26-8f80-798d1cb2c500 00:18:21.230 03:02:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=712bc737-0869-4a26-8f80-798d1cb2c500 00:18:21.230 03:02:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:21.230 03:02:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:21.230 03:02:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:21.230 03:02:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 712bc737-0869-4a26-8f80-798d1cb2c500 00:18:21.488 03:02:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:21.488 { 00:18:21.488 "name": "712bc737-0869-4a26-8f80-798d1cb2c500", 00:18:21.488 "aliases": [ 00:18:21.488 "lvs/nvme0n1p0" 00:18:21.488 ], 00:18:21.488 "product_name": "Logical Volume", 00:18:21.488 "block_size": 4096, 00:18:21.488 "num_blocks": 26476544, 00:18:21.488 "uuid": "712bc737-0869-4a26-8f80-798d1cb2c500", 00:18:21.488 "assigned_rate_limits": { 00:18:21.488 "rw_ios_per_sec": 0, 00:18:21.488 "rw_mbytes_per_sec": 0, 00:18:21.488 "r_mbytes_per_sec": 0, 00:18:21.488 "w_mbytes_per_sec": 0 00:18:21.488 }, 00:18:21.488 "claimed": false, 00:18:21.488 "zoned": false, 00:18:21.488 "supported_io_types": { 00:18:21.488 "read": true, 00:18:21.488 "write": true, 00:18:21.488 "unmap": true, 00:18:21.488 "flush": false, 00:18:21.488 "reset": true, 00:18:21.488 "nvme_admin": false, 00:18:21.488 "nvme_io": false, 00:18:21.488 "nvme_io_md": false, 00:18:21.488 "write_zeroes": true, 00:18:21.488 "zcopy": false, 00:18:21.488 "get_zone_info": false, 00:18:21.488 "zone_management": false, 00:18:21.488 "zone_append": false, 00:18:21.488 "compare": false, 00:18:21.488 "compare_and_write": false, 00:18:21.488 "abort": false, 00:18:21.488 "seek_hole": true, 00:18:21.489 "seek_data": true, 00:18:21.489 "copy": false, 00:18:21.489 "nvme_iov_md": false 00:18:21.489 }, 00:18:21.489 "driver_specific": { 00:18:21.489 "lvol": { 00:18:21.489 "lvol_store_uuid": "fe5a3a91-5133-4603-bdd2-df82d9c0f1f3", 00:18:21.489 "base_bdev": "nvme0n1", 00:18:21.489 "thin_provision": true, 00:18:21.489 "num_allocated_clusters": 0, 00:18:21.489 "snapshot": false, 00:18:21.489 "clone": false, 00:18:21.489 "esnap_clone": false 00:18:21.489 } 00:18:21.489 } 00:18:21.489 } 00:18:21.489 ]' 00:18:21.489 03:02:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:21.489 03:02:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:21.489 03:02:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:21.489 03:02:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:21.489 03:02:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:21.489 03:02:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:21.489 03:02:52 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:18:21.489 03:02:52 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:21.747 03:02:52 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:18:21.747 03:02:52 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:18:21.747 03:02:52 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:18:21.747 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:18:21.747 03:02:52 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 712bc737-0869-4a26-8f80-798d1cb2c500 00:18:21.747 03:02:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=712bc737-0869-4a26-8f80-798d1cb2c500 00:18:21.747 03:02:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:21.747 03:02:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:21.747 03:02:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:21.747 03:02:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 712bc737-0869-4a26-8f80-798d1cb2c500 00:18:22.006 03:02:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:22.006 { 00:18:22.006 "name": "712bc737-0869-4a26-8f80-798d1cb2c500", 00:18:22.006 "aliases": [ 00:18:22.006 "lvs/nvme0n1p0" 00:18:22.006 ], 00:18:22.006 "product_name": "Logical Volume", 00:18:22.006 "block_size": 4096, 00:18:22.006 "num_blocks": 26476544, 00:18:22.006 "uuid": "712bc737-0869-4a26-8f80-798d1cb2c500", 00:18:22.006 "assigned_rate_limits": { 00:18:22.006 "rw_ios_per_sec": 0, 00:18:22.006 "rw_mbytes_per_sec": 0, 00:18:22.006 "r_mbytes_per_sec": 0, 00:18:22.006 "w_mbytes_per_sec": 0 00:18:22.006 }, 00:18:22.006 "claimed": false, 00:18:22.006 "zoned": false, 00:18:22.006 "supported_io_types": { 00:18:22.006 "read": true, 00:18:22.006 "write": true, 00:18:22.006 "unmap": true, 00:18:22.006 "flush": false, 00:18:22.006 "reset": true, 00:18:22.006 "nvme_admin": false, 00:18:22.006 "nvme_io": false, 00:18:22.006 "nvme_io_md": false, 00:18:22.006 "write_zeroes": true, 00:18:22.006 "zcopy": false, 00:18:22.006 "get_zone_info": false, 00:18:22.006 "zone_management": false, 00:18:22.006 "zone_append": false, 00:18:22.006 "compare": false, 00:18:22.006 "compare_and_write": false, 00:18:22.006 "abort": false, 00:18:22.006 "seek_hole": true, 00:18:22.006 "seek_data": true, 00:18:22.006 "copy": false, 00:18:22.006 "nvme_iov_md": false 00:18:22.006 }, 00:18:22.006 "driver_specific": { 00:18:22.006 "lvol": { 00:18:22.006 "lvol_store_uuid": "fe5a3a91-5133-4603-bdd2-df82d9c0f1f3", 00:18:22.006 "base_bdev": "nvme0n1", 00:18:22.006 "thin_provision": true, 00:18:22.006 "num_allocated_clusters": 0, 00:18:22.006 "snapshot": false, 00:18:22.006 "clone": false, 00:18:22.006 "esnap_clone": false 00:18:22.006 } 00:18:22.006 } 00:18:22.006 } 00:18:22.006 ]' 00:18:22.006 03:02:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:22.006 03:02:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:22.006 03:02:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:22.006 03:02:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:22.006 03:02:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:22.006 03:02:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:22.006 03:02:52 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:18:22.006 03:02:52 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:18:22.006 03:02:52 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 712bc737-0869-4a26-8f80-798d1cb2c500 -c nvc0n1p0 --l2p_dram_limit 60 00:18:22.266 [2024-12-05 03:02:52.984532] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.266 [2024-12-05 03:02:52.984573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:22.267 [2024-12-05 03:02:52.984587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:22.267 [2024-12-05 03:02:52.984595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.267 [2024-12-05 03:02:52.984649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.267 [2024-12-05 03:02:52.984658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:22.267 [2024-12-05 03:02:52.984668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:22.267 [2024-12-05 03:02:52.984674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.267 [2024-12-05 03:02:52.984712] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:22.267 [2024-12-05 03:02:52.985276] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:22.267 [2024-12-05 03:02:52.985294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.267 [2024-12-05 03:02:52.985300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:22.267 [2024-12-05 03:02:52.985309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:18:22.267 [2024-12-05 03:02:52.985315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.267 [2024-12-05 03:02:52.985347] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6b735fa4-2457-4503-9896-8ac700d9e4c0 00:18:22.267 [2024-12-05 03:02:52.986637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.267 [2024-12-05 03:02:52.986768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:22.267 [2024-12-05 03:02:52.986783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:18:22.267 [2024-12-05 03:02:52.986793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.267 [2024-12-05 03:02:52.993566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.267 [2024-12-05 03:02:52.993657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:22.267 [2024-12-05 03:02:52.993700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.685 ms 00:18:22.267 [2024-12-05 03:02:52.993721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.267 [2024-12-05 03:02:52.993822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.267 [2024-12-05 03:02:52.993861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:22.267 [2024-12-05 03:02:52.993898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:22.267 [2024-12-05 03:02:52.993917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.267 [2024-12-05 03:02:52.993971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.267 [2024-12-05 03:02:52.993992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:22.267 [2024-12-05 03:02:52.994008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:22.267 [2024-12-05 03:02:52.994025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:18:22.267 [2024-12-05 03:02:52.994060] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:22.267 [2024-12-05 03:02:52.997378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.267 [2024-12-05 03:02:52.997466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:22.267 [2024-12-05 03:02:52.997541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.320 ms 00:18:22.267 [2024-12-05 03:02:52.997560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.267 [2024-12-05 03:02:52.997612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.267 [2024-12-05 03:02:52.997672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:22.267 [2024-12-05 03:02:52.997693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:22.267 [2024-12-05 03:02:52.997708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.267 [2024-12-05 03:02:52.997749] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:22.267 [2024-12-05 03:02:52.997889] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:22.267 [2024-12-05 03:02:52.997922] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:22.267 [2024-12-05 03:02:52.997974] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:22.267 [2024-12-05 03:02:52.998032] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:22.267 [2024-12-05 03:02:52.998057] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:22.267 [2024-12-05 03:02:52.998093] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:22.267 [2024-12-05 03:02:52.998138] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:22.267 [2024-12-05 03:02:52.998157] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:22.267 [2024-12-05 03:02:52.998172] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:22.267 [2024-12-05 03:02:52.998189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.267 [2024-12-05 03:02:52.998205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:22.267 [2024-12-05 03:02:52.998258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:18:22.267 [2024-12-05 03:02:52.998276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.267 [2024-12-05 03:02:52.998361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.267 [2024-12-05 03:02:52.998397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:22.267 [2024-12-05 03:02:52.998413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:18:22.267 [2024-12-05 03:02:52.998454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.267 [2024-12-05 03:02:52.998571] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:22.267 [2024-12-05 03:02:52.998633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:22.267 
[2024-12-05 03:02:52.998657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:22.267 [2024-12-05 03:02:52.998672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.267 [2024-12-05 03:02:52.998689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:22.267 [2024-12-05 03:02:52.998778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:22.267 [2024-12-05 03:02:52.998797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:22.267 [2024-12-05 03:02:52.998811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:22.267 [2024-12-05 03:02:52.998829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:22.267 [2024-12-05 03:02:52.998843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:22.267 [2024-12-05 03:02:52.998935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:22.267 [2024-12-05 03:02:52.998952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:22.267 [2024-12-05 03:02:52.998969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:22.267 [2024-12-05 03:02:52.998983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:22.267 [2024-12-05 03:02:52.998999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:22.267 [2024-12-05 03:02:52.999015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.267 [2024-12-05 03:02:52.999103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:22.267 [2024-12-05 03:02:52.999123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:22.267 [2024-12-05 03:02:52.999139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.267 [2024-12-05 03:02:52.999153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:22.267 [2024-12-05 03:02:52.999170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:22.267 [2024-12-05 03:02:52.999184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.267 [2024-12-05 03:02:52.999203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:22.267 [2024-12-05 03:02:52.999278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:22.267 [2024-12-05 03:02:52.999297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.267 [2024-12-05 03:02:52.999311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:22.267 [2024-12-05 03:02:52.999327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:22.267 [2024-12-05 03:02:52.999342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.267 [2024-12-05 03:02:52.999357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:22.267 [2024-12-05 03:02:52.999371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:22.267 [2024-12-05 03:02:52.999387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.267 [2024-12-05 03:02:52.999438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:22.267 [2024-12-05 03:02:52.999459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:22.267 [2024-12-05 03:02:52.999489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:18:22.267 [2024-12-05 03:02:52.999506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:22.267 [2024-12-05 03:02:52.999519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:22.267 [2024-12-05 03:02:52.999535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:22.267 [2024-12-05 03:02:52.999582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:22.267 [2024-12-05 03:02:52.999597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:22.268 [2024-12-05 03:02:52.999611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.268 [2024-12-05 03:02:52.999627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:22.268 [2024-12-05 03:02:52.999640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:22.268 [2024-12-05 03:02:52.999689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.268 [2024-12-05 03:02:52.999706] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:22.268 [2024-12-05 03:02:52.999723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:22.268 [2024-12-05 03:02:52.999739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:22.268 [2024-12-05 03:02:52.999755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.268 [2024-12-05 03:02:52.999770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:22.268 [2024-12-05 03:02:52.999878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:22.268 [2024-12-05 03:02:52.999896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:22.268 [2024-12-05 03:02:52.999912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:22.268 [2024-12-05 03:02:52.999927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:22.268 [2024-12-05 03:02:52.999948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:22.268 [2024-12-05 03:02:52.999964] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:22.268 [2024-12-05 03:02:53.000093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:22.268 [2024-12-05 03:02:53.000119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:22.268 [2024-12-05 03:02:53.000144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:22.268 [2024-12-05 03:02:53.000198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:22.268 [2024-12-05 03:02:53.000225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:22.268 [2024-12-05 03:02:53.000255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:22.268 [2024-12-05 03:02:53.000280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:22.268 [2024-12-05 
03:02:53.000329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:22.268 [2024-12-05 03:02:53.000506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:22.268 [2024-12-05 03:02:53.000530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:22.268 [2024-12-05 03:02:53.000575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:22.268 [2024-12-05 03:02:53.000598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:22.268 [2024-12-05 03:02:53.000647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:22.268 [2024-12-05 03:02:53.000674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:22.268 [2024-12-05 03:02:53.000697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:22.268 [2024-12-05 03:02:53.000741] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:22.268 [2024-12-05 03:02:53.000793] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:22.268 [2024-12-05 03:02:53.000965] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:22.268 [2024-12-05 03:02:53.001011] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:22.268 [2024-12-05 03:02:53.001036] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:22.268 [2024-12-05 03:02:53.001060] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:22.268 [2024-12-05 03:02:53.001122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.268 [2024-12-05 03:02:53.001144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:22.268 [2024-12-05 03:02:53.001160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.605 ms 00:18:22.268 [2024-12-05 03:02:53.001177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.268 [2024-12-05 03:02:53.001242] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:18:22.268 [2024-12-05 03:02:53.001272] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:25.575 [2024-12-05 03:02:55.686055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.575 [2024-12-05 03:02:55.686283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:25.575 [2024-12-05 03:02:55.686383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2684.801 ms 00:18:25.575 [2024-12-05 03:02:55.686413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.575 [2024-12-05 03:02:55.714585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.575 [2024-12-05 03:02:55.714749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:25.575 [2024-12-05 03:02:55.714836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.907 ms 00:18:25.575 [2024-12-05 03:02:55.714862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.575 [2024-12-05 03:02:55.715003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.575 [2024-12-05 03:02:55.715151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:25.575 [2024-12-05 03:02:55.715165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:25.575 [2024-12-05 03:02:55.715178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.575 [2024-12-05 03:02:55.758629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.575 [2024-12-05 03:02:55.758671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:25.575 [2024-12-05 03:02:55.758687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.400 ms 00:18:25.575 [2024-12-05 03:02:55.758698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.575 [2024-12-05 03:02:55.758743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.575 [2024-12-05 03:02:55.758754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:25.575 [2024-12-05 03:02:55.758763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:25.575 [2024-12-05 03:02:55.758773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.575 [2024-12-05 03:02:55.759273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.575 [2024-12-05 03:02:55.759294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:25.575 [2024-12-05 03:02:55.759304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:18:25.575 [2024-12-05 03:02:55.759316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.575 [2024-12-05 03:02:55.759443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.575 [2024-12-05 03:02:55.759455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:25.575 [2024-12-05 03:02:55.759464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:18:25.575 [2024-12-05 03:02:55.759475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.575 [2024-12-05 03:02:55.775481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.575 [2024-12-05 03:02:55.775512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:25.575 [2024-12-05 
03:02:55.775522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.977 ms 00:18:25.575 [2024-12-05 03:02:55.775532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.575 [2024-12-05 03:02:55.787726] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:25.575 [2024-12-05 03:02:55.804955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.575 [2024-12-05 03:02:55.804985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:25.575 [2024-12-05 03:02:55.805000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.334 ms 00:18:25.575 [2024-12-05 03:02:55.805009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.575 [2024-12-05 03:02:55.861137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.575 [2024-12-05 03:02:55.861182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:25.575 [2024-12-05 03:02:55.861200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.097 ms 00:18:25.575 [2024-12-05 03:02:55.861210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.575 [2024-12-05 03:02:55.861407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.575 [2024-12-05 03:02:55.861425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:25.575 [2024-12-05 03:02:55.861440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:18:25.575 [2024-12-05 03:02:55.861448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.575 [2024-12-05 03:02:55.884448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.575 [2024-12-05 03:02:55.884634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:25.575 [2024-12-05 03:02:55.884656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.922 ms 00:18:25.575 [2024-12-05 03:02:55.884665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.575 [2024-12-05 03:02:55.907216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.575 [2024-12-05 03:02:55.907331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:25.575 [2024-12-05 03:02:55.907351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.505 ms 00:18:25.575 [2024-12-05 03:02:55.907359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.575 [2024-12-05 03:02:55.907947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.575 [2024-12-05 03:02:55.907965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:25.575 [2024-12-05 03:02:55.907976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:18:25.575 [2024-12-05 03:02:55.907983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.575 [2024-12-05 03:02:55.976879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.575 [2024-12-05 03:02:55.976912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:25.575 [2024-12-05 03:02:55.976928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.849 ms 00:18:25.576 [2024-12-05 03:02:55.976939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.576 [2024-12-05 
03:02:56.002452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.576 [2024-12-05 03:02:56.002575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:25.576 [2024-12-05 03:02:56.002595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.428 ms 00:18:25.576 [2024-12-05 03:02:56.002604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.576 [2024-12-05 03:02:56.026066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.576 [2024-12-05 03:02:56.026104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:25.576 [2024-12-05 03:02:56.026116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.421 ms 00:18:25.576 [2024-12-05 03:02:56.026123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.576 [2024-12-05 03:02:56.050762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.576 [2024-12-05 03:02:56.050793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:25.576 [2024-12-05 03:02:56.050806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.591 ms 00:18:25.576 [2024-12-05 03:02:56.050813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.576 [2024-12-05 03:02:56.050864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.576 [2024-12-05 03:02:56.050874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:25.576 [2024-12-05 03:02:56.050890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:25.576 [2024-12-05 03:02:56.050898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.576 [2024-12-05 03:02:56.050985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.576 [2024-12-05 03:02:56.050996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:25.576 [2024-12-05 03:02:56.051006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:25.576 [2024-12-05 03:02:56.051014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.576 [2024-12-05 03:02:56.052175] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3067.157 ms, result 0 00:18:25.576 { 00:18:25.576 "name": "ftl0", 00:18:25.576 "uuid": "6b735fa4-2457-4503-9896-8ac700d9e4c0" 00:18:25.576 } 00:18:25.576 03:02:56 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:18:25.576 03:02:56 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:18:25.576 03:02:56 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:25.576 03:02:56 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:18:25.576 03:02:56 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:25.576 03:02:56 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:25.576 03:02:56 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:25.576 03:02:56 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:25.835 [ 00:18:25.835 { 00:18:25.835 "name": "ftl0", 00:18:25.835 "aliases": [ 00:18:25.835 "6b735fa4-2457-4503-9896-8ac700d9e4c0" 00:18:25.835 ], 00:18:25.835 "product_name": "FTL 
disk", 00:18:25.835 "block_size": 4096, 00:18:25.835 "num_blocks": 20971520, 00:18:25.835 "uuid": "6b735fa4-2457-4503-9896-8ac700d9e4c0", 00:18:25.835 "assigned_rate_limits": { 00:18:25.835 "rw_ios_per_sec": 0, 00:18:25.835 "rw_mbytes_per_sec": 0, 00:18:25.835 "r_mbytes_per_sec": 0, 00:18:25.835 "w_mbytes_per_sec": 0 00:18:25.835 }, 00:18:25.835 "claimed": false, 00:18:25.835 "zoned": false, 00:18:25.835 "supported_io_types": { 00:18:25.835 "read": true, 00:18:25.835 "write": true, 00:18:25.835 "unmap": true, 00:18:25.835 "flush": true, 00:18:25.835 "reset": false, 00:18:25.835 "nvme_admin": false, 00:18:25.835 "nvme_io": false, 00:18:25.835 "nvme_io_md": false, 00:18:25.835 "write_zeroes": true, 00:18:25.835 "zcopy": false, 00:18:25.835 "get_zone_info": false, 00:18:25.835 "zone_management": false, 00:18:25.835 "zone_append": false, 00:18:25.835 "compare": false, 00:18:25.835 "compare_and_write": false, 00:18:25.836 "abort": false, 00:18:25.836 "seek_hole": false, 00:18:25.836 "seek_data": false, 00:18:25.836 "copy": false, 00:18:25.836 "nvme_iov_md": false 00:18:25.836 }, 00:18:25.836 "driver_specific": { 00:18:25.836 "ftl": { 00:18:25.836 "base_bdev": "712bc737-0869-4a26-8f80-798d1cb2c500", 00:18:25.836 "cache": "nvc0n1p0" 00:18:25.836 } 00:18:25.836 } 00:18:25.836 } 00:18:25.836 ] 00:18:25.836 03:02:56 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:18:25.836 03:02:56 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:18:25.836 03:02:56 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:25.836 03:02:56 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:18:25.836 03:02:56 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:26.095 [2024-12-05 03:02:56.848632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.095 [2024-12-05 03:02:56.848672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:26.095 [2024-12-05 03:02:56.848684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:26.095 [2024-12-05 03:02:56.848692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.095 [2024-12-05 03:02:56.848721] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:26.095 [2024-12-05 03:02:56.850983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.095 [2024-12-05 03:02:56.851009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:26.095 [2024-12-05 03:02:56.851020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.246 ms 00:18:26.095 [2024-12-05 03:02:56.851027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.095 [2024-12-05 03:02:56.851398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.095 [2024-12-05 03:02:56.851413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:26.095 [2024-12-05 03:02:56.851422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:18:26.095 [2024-12-05 03:02:56.851428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.095 [2024-12-05 03:02:56.854111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.095 [2024-12-05 03:02:56.854132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:26.095 
[2024-12-05 03:02:56.854141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.661 ms 00:18:26.095 [2024-12-05 03:02:56.854147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.095 [2024-12-05 03:02:56.858840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.095 [2024-12-05 03:02:56.858862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:26.095 [2024-12-05 03:02:56.858872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.669 ms 00:18:26.095 [2024-12-05 03:02:56.858879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.095 [2024-12-05 03:02:56.877435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.095 [2024-12-05 03:02:56.877463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:26.095 [2024-12-05 03:02:56.877486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.483 ms 00:18:26.095 [2024-12-05 03:02:56.877492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.095 [2024-12-05 03:02:56.890327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.095 [2024-12-05 03:02:56.890356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:26.095 [2024-12-05 03:02:56.890370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.793 ms 00:18:26.095 [2024-12-05 03:02:56.890376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.095 [2024-12-05 03:02:56.890529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.095 [2024-12-05 03:02:56.890538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:26.095 [2024-12-05 03:02:56.890546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:18:26.095 [2024-12-05 03:02:56.890552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.095 [2024-12-05 03:02:56.908540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.095 [2024-12-05 03:02:56.908658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:26.095 [2024-12-05 03:02:56.908674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.966 ms 00:18:26.095 [2024-12-05 03:02:56.908680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.095 [2024-12-05 03:02:56.926458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.095 [2024-12-05 03:02:56.926482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:26.095 [2024-12-05 03:02:56.926492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.744 ms 00:18:26.095 [2024-12-05 03:02:56.926498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.356 [2024-12-05 03:02:56.943983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.356 [2024-12-05 03:02:56.944096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:26.356 [2024-12-05 03:02:56.944112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.444 ms 00:18:26.356 [2024-12-05 03:02:56.944118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.356 [2024-12-05 03:02:56.961560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.356 [2024-12-05 03:02:56.961584] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:26.357 [2024-12-05 03:02:56.961593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.368 ms 00:18:26.357 [2024-12-05 03:02:56.961599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.357 [2024-12-05 03:02:56.961640] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:26.357 [2024-12-05 03:02:56.961651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 
[2024-12-05 03:02:56.961805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:18:26.357 [2024-12-05 03:02:56.961977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.961995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.962003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.962008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.962015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.962021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.962030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.962037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.962044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.962049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.962056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.962066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.962088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.962094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.962102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.962108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.962115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.962121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.962129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.962135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.962142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.962148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.962158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.962163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.962172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.962178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.962185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.962191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.962199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.962204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.962212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.962217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:26.357 [2024-12-05 03:02:56.962237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:26.358 [2024-12-05 03:02:56.962243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:26.358 [2024-12-05 03:02:56.962251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:26.358 [2024-12-05 03:02:56.962257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:26.358 [2024-12-05 03:02:56.962264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:26.358 [2024-12-05 03:02:56.962270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:26.358 [2024-12-05 03:02:56.962279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:26.358 [2024-12-05 03:02:56.962285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:26.358 [2024-12-05 03:02:56.962292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:26.358 [2024-12-05 03:02:56.962298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:26.358 [2024-12-05 03:02:56.962305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:26.358 [2024-12-05 03:02:56.962313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:26.358 [2024-12-05 03:02:56.962320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:26.358 [2024-12-05 03:02:56.962326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:26.358 [2024-12-05 03:02:56.962334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:26.358 [2024-12-05 03:02:56.962340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:26.358 [2024-12-05 03:02:56.962347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:26.358 [2024-12-05 03:02:56.962353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:26.358 [2024-12-05 03:02:56.962361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:26.358 [2024-12-05 03:02:56.962374] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:26.358 [2024-12-05 03:02:56.962381] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6b735fa4-2457-4503-9896-8ac700d9e4c0 00:18:26.358 [2024-12-05 03:02:56.962388] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:26.358 [2024-12-05 03:02:56.962397] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:26.358 [2024-12-05 03:02:56.962404] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:26.358 [2024-12-05 03:02:56.962413] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:26.358 [2024-12-05 03:02:56.962419] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:26.358 [2024-12-05 03:02:56.962436] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:26.358 [2024-12-05 03:02:56.962442] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:26.358 [2024-12-05 03:02:56.962449] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:26.358 [2024-12-05 03:02:56.962453] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:26.358 [2024-12-05 03:02:56.962460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.358 [2024-12-05 03:02:56.962466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:26.358 [2024-12-05 03:02:56.962475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.822 ms 00:18:26.358 [2024-12-05 03:02:56.962481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.358 [2024-12-05 03:02:56.972635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.358 [2024-12-05 03:02:56.972662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:26.358 [2024-12-05 03:02:56.972672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.103 ms 00:18:26.358 [2024-12-05 03:02:56.972678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.358 [2024-12-05 03:02:56.972973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.358 [2024-12-05 03:02:56.972981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:26.358 [2024-12-05 03:02:56.972990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:18:26.358 [2024-12-05 03:02:56.972996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.358 [2024-12-05 03:02:57.009758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.358 [2024-12-05 03:02:57.009785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:26.358 [2024-12-05 03:02:57.009795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.358 [2024-12-05 03:02:57.009802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:26.358 [2024-12-05 03:02:57.009857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.358 [2024-12-05 03:02:57.009863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:26.358 [2024-12-05 03:02:57.009871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.358 [2024-12-05 03:02:57.009877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.358 [2024-12-05 03:02:57.009951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.358 [2024-12-05 03:02:57.009962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:26.358 [2024-12-05 03:02:57.009970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.358 [2024-12-05 03:02:57.009976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.358 [2024-12-05 03:02:57.010000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.358 [2024-12-05 03:02:57.010008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:26.358 [2024-12-05 03:02:57.010015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.358 [2024-12-05 03:02:57.010020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.358 [2024-12-05 03:02:57.075771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.358 [2024-12-05 03:02:57.075918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:26.358 [2024-12-05 03:02:57.075937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.358 [2024-12-05 03:02:57.075945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.358 [2024-12-05 03:02:57.127320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.358 [2024-12-05 03:02:57.127433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:26.358 [2024-12-05 03:02:57.127478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.358 [2024-12-05 03:02:57.127496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.358 [2024-12-05 03:02:57.127597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.358 [2024-12-05 03:02:57.127647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:26.358 [2024-12-05 03:02:57.127672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.358 [2024-12-05 03:02:57.127688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.358 [2024-12-05 03:02:57.127785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.358 [2024-12-05 03:02:57.127806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:26.358 [2024-12-05 03:02:57.127862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.358 [2024-12-05 03:02:57.127880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.358 [2024-12-05 03:02:57.127989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.358 [2024-12-05 03:02:57.128037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:26.358 [2024-12-05 03:02:57.128057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.358 [2024-12-05 
03:02:57.128089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.358 [2024-12-05 03:02:57.128175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.358 [2024-12-05 03:02:57.128197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:26.358 [2024-12-05 03:02:57.128214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.358 [2024-12-05 03:02:57.128233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.358 [2024-12-05 03:02:57.128303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.358 [2024-12-05 03:02:57.128634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:26.358 [2024-12-05 03:02:57.128734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.358 [2024-12-05 03:02:57.128757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.358 [2024-12-05 03:02:57.128837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.358 [2024-12-05 03:02:57.128883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:26.358 [2024-12-05 03:02:57.128901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.358 [2024-12-05 03:02:57.128981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.358 [2024-12-05 03:02:57.129186] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 280.528 ms, result 0 00:18:26.358 true 00:18:26.358 03:02:57 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75128 00:18:26.358 03:02:57 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75128 ']' 00:18:26.358 03:02:57 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75128 00:18:26.358 03:02:57 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:18:26.358 03:02:57 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:26.359 03:02:57 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75128 00:18:26.359 03:02:57 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:26.359 03:02:57 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:26.359 03:02:57 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75128' 00:18:26.359 killing process with pid 75128 00:18:26.359 03:02:57 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75128 00:18:26.359 03:02:57 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75128 00:18:32.943 03:03:03 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:32.943 03:03:03 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:32.943 03:03:03 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:32.943 03:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:32.943 03:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:32.943 03:03:03 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:32.943 03:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:32.943 03:03:03 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:32.943 03:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:32.943 03:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:32.943 03:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:32.943 03:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:32.943 03:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:32.943 03:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:32.943 03:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:32.943 03:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:32.943 03:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:32.943 03:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:32.943 03:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:32.943 03:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:32.944 03:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:32.944 03:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:32.944 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:32.944 fio-3.35 00:18:32.944 Starting 1 thread 00:18:38.219 00:18:38.219 test: (groupid=0, jobs=1): err= 0: pid=75316: Thu Dec 5 03:03:08 2024 00:18:38.219 read: IOPS=848, BW=56.3MiB/s (59.1MB/s)(255MiB/4517msec) 00:18:38.219 slat (usec): min=4, max=266, avg= 6.82, stdev= 5.29 00:18:38.219 clat (usec): min=263, max=2318, avg=533.39, stdev=228.35 00:18:38.219 lat (usec): min=268, max=2328, avg=540.21, stdev=229.96 00:18:38.219 clat percentiles (usec): 00:18:38.219 | 1.00th=[ 293], 5.00th=[ 302], 10.00th=[ 322], 20.00th=[ 326], 00:18:38.219 | 30.00th=[ 334], 40.00th=[ 396], 50.00th=[ 490], 60.00th=[ 502], 00:18:38.219 | 70.00th=[ 603], 80.00th=[ 807], 90.00th=[ 881], 95.00th=[ 938], 00:18:38.219 | 99.00th=[ 1106], 99.50th=[ 1156], 99.90th=[ 1352], 99.95th=[ 1483], 00:18:38.219 | 99.99th=[ 2311] 00:18:38.219 write: IOPS=855, BW=56.8MiB/s (59.5MB/s)(256MiB/4509msec); 0 zone resets 00:18:38.219 slat (nsec): min=15040, max=85417, avg=21463.32, stdev=6039.68 00:18:38.219 clat (usec): min=298, max=1759, avg=599.31, stdev=268.23 00:18:38.219 lat (usec): min=315, max=1779, avg=620.77, stdev=270.52 00:18:38.219 clat percentiles (usec): 00:18:38.219 | 1.00th=[ 314], 5.00th=[ 338], 10.00th=[ 347], 20.00th=[ 355], 00:18:38.219 | 30.00th=[ 371], 40.00th=[ 453], 50.00th=[ 523], 60.00th=[ 578], 00:18:38.219 | 70.00th=[ 701], 80.00th=[ 889], 90.00th=[ 979], 95.00th=[ 1029], 00:18:38.219 | 99.00th=[ 1532], 99.50th=[ 1598], 99.90th=[ 1745], 99.95th=[ 1762], 00:18:38.219 | 99.99th=[ 1762] 00:18:38.219 bw ( KiB/s): min=34408, max=89488, per=99.99%, avg=58147.56, stdev=17110.63, samples=9 00:18:38.219 iops : min= 506, max= 1316, avg=855.11, stdev=251.63, samples=9 00:18:38.219 lat (usec) : 500=51.22%, 750=22.45%, 1000=21.58% 
00:18:38.219 lat (msec) : 2=4.75%, 4=0.01% 00:18:38.219 cpu : usr=98.43%, sys=0.44%, ctx=12, majf=0, minf=1169 00:18:38.219 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:38.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:38.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:38.219 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:38.219 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:38.219 00:18:38.219 Run status group 0 (all jobs): 00:18:38.219 READ: bw=56.3MiB/s (59.1MB/s), 56.3MiB/s-56.3MiB/s (59.1MB/s-59.1MB/s), io=255MiB (267MB), run=4517-4517msec 00:18:38.219 WRITE: bw=56.8MiB/s (59.5MB/s), 56.8MiB/s-56.8MiB/s (59.5MB/s-59.5MB/s), io=256MiB (269MB), run=4509-4509msec 00:18:39.608 ----------------------------------------------------- 00:18:39.608 Suppressions used: 00:18:39.608 count bytes template 00:18:39.608 1 5 /usr/src/fio/parse.c 00:18:39.608 1 8 libtcmalloc_minimal.so 00:18:39.608 1 904 libcrypto.so 00:18:39.608 ----------------------------------------------------- 00:18:39.608 00:18:39.608 03:03:10 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:39.608 03:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:39.608 03:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:39.608 03:03:10 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:39.608 03:03:10 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:39.608 03:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:39.608 03:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:39.608 03:03:10 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:39.608 03:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:39.608 03:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:39.608 03:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:39.608 03:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:39.608 03:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:39.608 03:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:39.608 03:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:39.608 03:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:39.608 03:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:39.608 03:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:39.608 03:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:39.868 03:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:39.868 03:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:39.868 03:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:39.868 03:03:10 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:39.868 03:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:39.868 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:39.868 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:39.868 fio-3.35 00:18:39.868 Starting 2 threads 00:19:06.421 00:19:06.421 first_half: (groupid=0, jobs=1): err= 0: pid=75429: Thu Dec 5 03:03:33 2024 00:19:06.421 read: IOPS=2971, BW=11.6MiB/s (12.2MB/s)(255MiB/21975msec) 00:19:06.421 slat (nsec): min=3064, max=35725, avg=4593.13, stdev=1284.81 00:19:06.421 clat (usec): min=613, max=275299, avg=34492.47, stdev=16128.82 00:19:06.421 lat (usec): min=618, max=275302, avg=34497.06, stdev=16128.93 00:19:06.421 clat percentiles (msec): 00:19:06.421 | 1.00th=[ 13], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 30], 00:19:06.421 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:19:06.421 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 40], 95.00th=[ 49], 00:19:06.421 | 99.00th=[ 125], 99.50th=[ 146], 99.90th=[ 165], 99.95th=[ 234], 00:19:06.421 | 99.99th=[ 268] 00:19:06.421 write: IOPS=3611, BW=14.1MiB/s (14.8MB/s)(256MiB/18146msec); 0 zone resets 00:19:06.421 slat (usec): min=3, max=459, avg= 6.13, stdev= 4.32 00:19:06.421 clat (usec): min=326, max=77458, avg=8526.96, stdev=13101.35 00:19:06.421 lat (usec): min=332, max=77462, avg=8533.10, stdev=13101.57 00:19:06.421 clat percentiles (usec): 00:19:06.421 | 1.00th=[ 660], 5.00th=[ 783], 10.00th=[ 906], 20.00th=[ 1254], 00:19:06.421 | 30.00th=[ 2606], 40.00th=[ 3556], 50.00th=[ 4621], 60.00th=[ 5407], 00:19:06.421 | 70.00th=[ 6063], 80.00th=[ 9896], 90.00th=[19006], 95.00th=[39060], 00:19:06.421 | 99.00th=[64750], 99.50th=[66847], 99.90th=[73925], 99.95th=[74974], 00:19:06.421 | 99.99th=[76022] 00:19:06.421 bw ( KiB/s): min= 704, max=42352, per=95.22%, avg=24966.10, stdev=13758.19, samples=21 00:19:06.421 iops : min= 176, max=10588, avg=6241.52, stdev=3439.55, samples=21 00:19:06.421 lat (usec) : 500=0.04%, 750=1.81%, 1000=5.10% 00:19:06.421 lat (msec) : 2=5.87%, 4=9.92%, 10=17.93%, 20=5.86%, 50=48.83% 00:19:06.421 lat (msec) : 100=3.82%, 250=0.80%, 500=0.02% 00:19:06.421 cpu : usr=99.25%, sys=0.14%, ctx=33, majf=0, minf=5601 00:19:06.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:06.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.421 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:06.421 issued rwts: total=65308,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.421 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.421 second_half: (groupid=0, jobs=1): err= 0: pid=75430: Thu Dec 5 03:03:33 2024 00:19:06.421 read: IOPS=2951, BW=11.5MiB/s (12.1MB/s)(255MiB/22128msec) 00:19:06.421 slat (nsec): min=3157, max=38068, avg=4972.12, stdev=1842.60 00:19:06.421 clat (usec): min=659, max=279790, avg=34098.92, stdev=18670.98 00:19:06.421 lat (usec): min=666, max=279797, avg=34103.89, stdev=18671.18 00:19:06.421 clat percentiles (msec): 00:19:06.421 | 1.00th=[ 8], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 30], 00:19:06.421 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:19:06.421 | 70.00th=[ 33], 80.00th=[ 36], 90.00th=[ 39], 
95.00th=[ 46], 00:19:06.421 | 99.00th=[ 140], 99.50th=[ 153], 99.90th=[ 203], 99.95th=[ 226], 00:19:06.421 | 99.99th=[ 275] 00:19:06.421 write: IOPS=3277, BW=12.8MiB/s (13.4MB/s)(256MiB/19997msec); 0 zone resets 00:19:06.421 slat (usec): min=3, max=1044, avg= 6.87, stdev= 6.64 00:19:06.421 clat (usec): min=350, max=76677, avg=9215.99, stdev=14046.16 00:19:06.421 lat (usec): min=358, max=76683, avg=9222.87, stdev=14046.60 00:19:06.421 clat percentiles (usec): 00:19:06.421 | 1.00th=[ 635], 5.00th=[ 758], 10.00th=[ 857], 20.00th=[ 1057], 00:19:06.421 | 30.00th=[ 1352], 40.00th=[ 2573], 50.00th=[ 3752], 60.00th=[ 5211], 00:19:06.421 | 70.00th=[ 6325], 80.00th=[14746], 90.00th=[26346], 95.00th=[42206], 00:19:06.421 | 99.00th=[65274], 99.50th=[67634], 99.90th=[72877], 99.95th=[73925], 00:19:06.421 | 99.99th=[76022] 00:19:06.421 bw ( KiB/s): min= 2264, max=65216, per=90.89%, avg=23831.27, stdev=15698.43, samples=22 00:19:06.421 iops : min= 566, max=16304, avg=5957.82, stdev=3924.61, samples=22 00:19:06.421 lat (usec) : 500=0.02%, 750=2.33%, 1000=6.19% 00:19:06.421 lat (msec) : 2=8.89%, 4=8.44%, 10=14.48%, 20=5.15%, 50=49.79% 00:19:06.421 lat (msec) : 100=3.68%, 250=1.02%, 500=0.01% 00:19:06.421 cpu : usr=99.30%, sys=0.14%, ctx=42, majf=0, minf=5510 00:19:06.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:06.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.421 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:06.421 issued rwts: total=65313,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.421 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.421 00:19:06.421 Run status group 0 (all jobs): 00:19:06.421 READ: bw=23.1MiB/s (24.2MB/s), 11.5MiB/s-11.6MiB/s (12.1MB/s-12.2MB/s), io=510MiB (535MB), run=21975-22128msec 00:19:06.421 WRITE: bw=25.6MiB/s (26.8MB/s), 12.8MiB/s-14.1MiB/s (13.4MB/s-14.8MB/s), io=512MiB (537MB), run=18146-19997msec 00:19:06.421 ----------------------------------------------------- 00:19:06.421 Suppressions used: 00:19:06.421 count bytes template 00:19:06.421 2 10 /usr/src/fio/parse.c 00:19:06.421 4 384 /usr/src/fio/iolog.c 00:19:06.421 1 8 libtcmalloc_minimal.so 00:19:06.421 1 904 libcrypto.so 00:19:06.421 ----------------------------------------------------- 00:19:06.421 00:19:06.421 03:03:36 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:19:06.421 03:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:06.421 03:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:06.421 03:03:36 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:06.421 03:03:36 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:19:06.422 03:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:06.422 03:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:06.422 03:03:36 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:06.422 03:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:06.422 03:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:06.422 03:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:19:06.422 03:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:06.422 03:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:06.422 03:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:19:06.422 03:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:06.422 03:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:06.422 03:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:06.422 03:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:19:06.422 03:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:06.422 03:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:06.422 03:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:06.422 03:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:19:06.422 03:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:06.422 03:03:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:06.422 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:06.422 fio-3.35 00:19:06.422 Starting 1 thread 00:19:21.322 00:19:21.322 test: (groupid=0, jobs=1): err= 0: pid=75723: Thu Dec 5 03:03:51 2024 00:19:21.322 read: IOPS=8272, BW=32.3MiB/s (33.9MB/s)(255MiB/7882msec) 00:19:21.322 slat (nsec): min=3103, max=41805, avg=4778.28, stdev=1109.64 00:19:21.322 clat (usec): min=526, max=30062, avg=15464.87, stdev=1969.20 00:19:21.322 lat (usec): min=531, max=30067, avg=15469.64, stdev=1969.25 00:19:21.322 clat percentiles (usec): 00:19:21.322 | 1.00th=[13698], 5.00th=[13829], 10.00th=[13960], 20.00th=[14091], 00:19:21.322 | 30.00th=[14353], 40.00th=[14484], 50.00th=[15008], 60.00th=[15401], 00:19:21.322 | 70.00th=[15926], 80.00th=[16319], 90.00th=[17171], 95.00th=[19006], 00:19:21.322 | 99.00th=[23987], 99.50th=[24511], 99.90th=[26870], 99.95th=[28443], 00:19:21.322 | 99.99th=[30016] 00:19:21.322 write: IOPS=11.4k, BW=44.7MiB/s (46.8MB/s)(256MiB/5732msec); 0 zone resets 00:19:21.322 slat (usec): min=4, max=207, avg= 6.82, stdev= 2.70 00:19:21.322 clat (usec): min=533, max=59024, avg=11150.12, stdev=12465.37 00:19:21.322 lat (usec): min=538, max=59030, avg=11156.95, stdev=12465.36 00:19:21.322 clat percentiles (usec): 00:19:21.322 | 1.00th=[ 725], 5.00th=[ 930], 10.00th=[ 1074], 20.00th=[ 1270], 00:19:21.322 | 30.00th=[ 1450], 40.00th=[ 1926], 50.00th=[ 7111], 60.00th=[ 9765], 00:19:21.322 | 70.00th=[13173], 80.00th=[16712], 90.00th=[36439], 95.00th=[38536], 00:19:21.322 | 99.00th=[42206], 99.50th=[43779], 99.90th=[47973], 99.95th=[49021], 00:19:21.323 | 99.99th=[57410] 00:19:21.323 bw ( KiB/s): min=19488, max=65264, per=95.53%, avg=43690.67, stdev=10642.85, samples=12 00:19:21.323 iops : min= 4872, max=16316, avg=10922.67, stdev=2660.64, samples=12 00:19:21.323 lat (usec) : 750=0.68%, 1000=2.84% 00:19:21.323 lat (msec) : 2=16.72%, 4=0.80%, 10=9.46%, 20=59.14%, 50=10.35% 00:19:21.323 lat (msec) : 100=0.02% 00:19:21.323 cpu : usr=99.10%, sys=0.17%, ctx=16, 
majf=0, minf=5565 00:19:21.323 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:21.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.323 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:21.323 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.323 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.323 00:19:21.323 Run status group 0 (all jobs): 00:19:21.323 READ: bw=32.3MiB/s (33.9MB/s), 32.3MiB/s-32.3MiB/s (33.9MB/s-33.9MB/s), io=255MiB (267MB), run=7882-7882msec 00:19:21.323 WRITE: bw=44.7MiB/s (46.8MB/s), 44.7MiB/s-44.7MiB/s (46.8MB/s-46.8MB/s), io=256MiB (268MB), run=5732-5732msec 00:19:22.261 ----------------------------------------------------- 00:19:22.261 Suppressions used: 00:19:22.261 count bytes template 00:19:22.261 1 5 /usr/src/fio/parse.c 00:19:22.261 2 192 /usr/src/fio/iolog.c 00:19:22.261 1 8 libtcmalloc_minimal.so 00:19:22.261 1 904 libcrypto.so 00:19:22.261 ----------------------------------------------------- 00:19:22.261 00:19:22.261 03:03:52 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:22.261 03:03:52 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:22.261 03:03:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:22.261 03:03:52 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:22.261 Remove shared memory files 00:19:22.261 03:03:52 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:22.261 03:03:52 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:22.261 03:03:52 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:22.261 03:03:52 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:22.261 03:03:52 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57134 /dev/shm/spdk_tgt_trace.pid74048 00:19:22.261 03:03:52 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:22.261 03:03:52 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:22.261 ************************************ 00:19:22.261 END TEST ftl_fio_basic 00:19:22.261 ************************************ 00:19:22.261 00:19:22.261 real 1m3.663s 00:19:22.261 user 2m17.266s 00:19:22.261 sys 0m2.982s 00:19:22.261 03:03:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:22.261 03:03:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:22.261 03:03:53 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:22.261 03:03:53 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:22.261 03:03:53 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:22.261 03:03:53 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:22.261 ************************************ 00:19:22.261 START TEST ftl_bdevperf 00:19:22.261 ************************************ 00:19:22.261 03:03:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:22.525 * Looking for test storage... 
00:19:22.525 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:22.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.525 --rc genhtml_branch_coverage=1 00:19:22.525 --rc genhtml_function_coverage=1 00:19:22.525 --rc genhtml_legend=1 00:19:22.525 --rc geninfo_all_blocks=1 00:19:22.525 --rc geninfo_unexecuted_blocks=1 00:19:22.525 00:19:22.525 ' 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:22.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.525 --rc genhtml_branch_coverage=1 00:19:22.525 
--rc genhtml_function_coverage=1 00:19:22.525 --rc genhtml_legend=1 00:19:22.525 --rc geninfo_all_blocks=1 00:19:22.525 --rc geninfo_unexecuted_blocks=1 00:19:22.525 00:19:22.525 ' 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:22.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.525 --rc genhtml_branch_coverage=1 00:19:22.525 --rc genhtml_function_coverage=1 00:19:22.525 --rc genhtml_legend=1 00:19:22.525 --rc geninfo_all_blocks=1 00:19:22.525 --rc geninfo_unexecuted_blocks=1 00:19:22.525 00:19:22.525 ' 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:22.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.525 --rc genhtml_branch_coverage=1 00:19:22.525 --rc genhtml_function_coverage=1 00:19:22.525 --rc genhtml_legend=1 00:19:22.525 --rc geninfo_all_blocks=1 00:19:22.525 --rc geninfo_unexecuted_blocks=1 00:19:22.525 00:19:22.525 ' 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75961 00:19:22.525 03:03:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:22.526 03:03:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:22.526 03:03:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75961 00:19:22.526 03:03:53 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 75961 ']' 00:19:22.526 03:03:53 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.526 03:03:53 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.526 03:03:53 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.526 03:03:53 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.526 03:03:53 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:22.526 [2024-12-05 03:03:53.308173] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:19:22.526 [2024-12-05 03:03:53.308569] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75961 ] 00:19:22.826 [2024-12-05 03:03:53.471329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.826 [2024-12-05 03:03:53.617992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.400 03:03:54 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.400 03:03:54 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:19:23.400 03:03:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:23.400 03:03:54 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:23.401 03:03:54 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:23.401 03:03:54 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:23.401 03:03:54 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:23.401 03:03:54 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:23.662 03:03:54 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:23.662 03:03:54 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:23.662 03:03:54 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:23.662 03:03:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:23.662 03:03:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:23.662 03:03:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:23.662 03:03:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:23.662 03:03:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:23.925 03:03:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:23.925 { 00:19:23.925 "name": "nvme0n1", 00:19:23.925 "aliases": [ 00:19:23.925 "a85412fb-ddcc-4684-a2c6-bc2c700623e5" 00:19:23.925 ], 00:19:23.925 "product_name": "NVMe disk", 00:19:23.925 "block_size": 4096, 00:19:23.925 "num_blocks": 1310720, 00:19:23.925 "uuid": "a85412fb-ddcc-4684-a2c6-bc2c700623e5", 00:19:23.925 "numa_id": -1, 00:19:23.925 "assigned_rate_limits": { 00:19:23.925 "rw_ios_per_sec": 0, 00:19:23.925 "rw_mbytes_per_sec": 0, 00:19:23.925 "r_mbytes_per_sec": 0, 00:19:23.925 "w_mbytes_per_sec": 0 00:19:23.925 }, 00:19:23.925 "claimed": true, 00:19:23.925 "claim_type": "read_many_write_one", 00:19:23.925 "zoned": false, 00:19:23.925 "supported_io_types": { 00:19:23.925 "read": true, 00:19:23.925 "write": true, 00:19:23.925 "unmap": true, 00:19:23.925 "flush": true, 00:19:23.925 "reset": true, 00:19:23.925 "nvme_admin": true, 00:19:23.925 "nvme_io": true, 00:19:23.925 "nvme_io_md": false, 00:19:23.925 "write_zeroes": true, 00:19:23.925 "zcopy": false, 00:19:23.925 "get_zone_info": false, 00:19:23.925 "zone_management": false, 00:19:23.925 "zone_append": false, 00:19:23.925 "compare": true, 00:19:23.925 "compare_and_write": false, 00:19:23.925 "abort": true, 00:19:23.925 "seek_hole": false, 00:19:23.925 "seek_data": false, 00:19:23.925 "copy": true, 00:19:23.925 "nvme_iov_md": false 00:19:23.925 }, 00:19:23.925 "driver_specific": { 00:19:23.925 
"nvme": [ 00:19:23.925 { 00:19:23.925 "pci_address": "0000:00:11.0", 00:19:23.925 "trid": { 00:19:23.925 "trtype": "PCIe", 00:19:23.925 "traddr": "0000:00:11.0" 00:19:23.925 }, 00:19:23.925 "ctrlr_data": { 00:19:23.925 "cntlid": 0, 00:19:23.925 "vendor_id": "0x1b36", 00:19:23.925 "model_number": "QEMU NVMe Ctrl", 00:19:23.925 "serial_number": "12341", 00:19:23.925 "firmware_revision": "8.0.0", 00:19:23.925 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:23.925 "oacs": { 00:19:23.925 "security": 0, 00:19:23.925 "format": 1, 00:19:23.925 "firmware": 0, 00:19:23.925 "ns_manage": 1 00:19:23.925 }, 00:19:23.925 "multi_ctrlr": false, 00:19:23.925 "ana_reporting": false 00:19:23.925 }, 00:19:23.925 "vs": { 00:19:23.925 "nvme_version": "1.4" 00:19:23.925 }, 00:19:23.925 "ns_data": { 00:19:23.925 "id": 1, 00:19:23.925 "can_share": false 00:19:23.925 } 00:19:23.925 } 00:19:23.925 ], 00:19:23.925 "mp_policy": "active_passive" 00:19:23.925 } 00:19:23.925 } 00:19:23.925 ]' 00:19:23.925 03:03:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:23.925 03:03:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:23.925 03:03:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:23.925 03:03:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:23.925 03:03:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:23.925 03:03:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:19:23.925 03:03:54 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:23.925 03:03:54 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:23.925 03:03:54 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:23.925 03:03:54 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:23.925 03:03:54 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:24.185 03:03:54 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=fe5a3a91-5133-4603-bdd2-df82d9c0f1f3 00:19:24.185 03:03:54 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:24.185 03:03:54 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fe5a3a91-5133-4603-bdd2-df82d9c0f1f3 00:19:24.443 03:03:55 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:24.701 03:03:55 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=c4f89f1f-c29a-4a00-9cc7-d432cf4a2a4b 00:19:24.701 03:03:55 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c4f89f1f-c29a-4a00-9cc7-d432cf4a2a4b 00:19:24.959 03:03:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=3bc1d173-5184-4679-be3b-0ac7f717e906 00:19:24.959 03:03:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3bc1d173-5184-4679-be3b-0ac7f717e906 00:19:24.959 03:03:55 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:24.959 03:03:55 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:24.959 03:03:55 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=3bc1d173-5184-4679-be3b-0ac7f717e906 00:19:24.959 03:03:55 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:24.959 03:03:55 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 3bc1d173-5184-4679-be3b-0ac7f717e906 00:19:24.959 03:03:55 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=3bc1d173-5184-4679-be3b-0ac7f717e906 00:19:24.959 03:03:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:24.959 03:03:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:24.959 03:03:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:24.959 03:03:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3bc1d173-5184-4679-be3b-0ac7f717e906 00:19:25.217 03:03:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:25.217 { 00:19:25.217 "name": "3bc1d173-5184-4679-be3b-0ac7f717e906", 00:19:25.217 "aliases": [ 00:19:25.217 "lvs/nvme0n1p0" 00:19:25.217 ], 00:19:25.217 "product_name": "Logical Volume", 00:19:25.217 "block_size": 4096, 00:19:25.217 "num_blocks": 26476544, 00:19:25.217 "uuid": "3bc1d173-5184-4679-be3b-0ac7f717e906", 00:19:25.217 "assigned_rate_limits": { 00:19:25.217 "rw_ios_per_sec": 0, 00:19:25.217 "rw_mbytes_per_sec": 0, 00:19:25.217 "r_mbytes_per_sec": 0, 00:19:25.217 "w_mbytes_per_sec": 0 00:19:25.217 }, 00:19:25.217 "claimed": false, 00:19:25.217 "zoned": false, 00:19:25.217 "supported_io_types": { 00:19:25.217 "read": true, 00:19:25.217 "write": true, 00:19:25.217 "unmap": true, 00:19:25.217 "flush": false, 00:19:25.217 "reset": true, 00:19:25.217 "nvme_admin": false, 00:19:25.217 "nvme_io": false, 00:19:25.217 "nvme_io_md": false, 00:19:25.217 "write_zeroes": true, 00:19:25.217 "zcopy": false, 00:19:25.217 "get_zone_info": false, 00:19:25.217 "zone_management": false, 00:19:25.217 "zone_append": false, 00:19:25.217 "compare": false, 00:19:25.217 "compare_and_write": false, 00:19:25.217 "abort": false, 00:19:25.217 "seek_hole": true, 00:19:25.217 "seek_data": true, 00:19:25.217 "copy": false, 00:19:25.217 "nvme_iov_md": false 00:19:25.217 }, 00:19:25.217 "driver_specific": { 00:19:25.217 "lvol": { 00:19:25.217 "lvol_store_uuid": "c4f89f1f-c29a-4a00-9cc7-d432cf4a2a4b", 00:19:25.217 "base_bdev": "nvme0n1", 00:19:25.217 "thin_provision": true, 00:19:25.217 "num_allocated_clusters": 0, 00:19:25.217 "snapshot": false, 00:19:25.217 "clone": false, 00:19:25.217 "esnap_clone": false 00:19:25.217 } 00:19:25.217 } 00:19:25.217 } 00:19:25.217 ]' 00:19:25.217 03:03:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:25.217 03:03:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:25.217 03:03:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:25.217 03:03:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:25.217 03:03:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:25.217 03:03:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:25.217 03:03:55 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:25.217 03:03:55 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:25.217 03:03:55 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:25.475 03:03:56 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:25.475 03:03:56 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:25.475 03:03:56 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 3bc1d173-5184-4679-be3b-0ac7f717e906 00:19:25.475 03:03:56 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=3bc1d173-5184-4679-be3b-0ac7f717e906 00:19:25.475 03:03:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:25.475 03:03:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:25.475 03:03:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:25.475 03:03:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3bc1d173-5184-4679-be3b-0ac7f717e906 00:19:25.733 03:03:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:25.733 { 00:19:25.733 "name": "3bc1d173-5184-4679-be3b-0ac7f717e906", 00:19:25.733 "aliases": [ 00:19:25.733 "lvs/nvme0n1p0" 00:19:25.733 ], 00:19:25.733 "product_name": "Logical Volume", 00:19:25.733 "block_size": 4096, 00:19:25.733 "num_blocks": 26476544, 00:19:25.733 "uuid": "3bc1d173-5184-4679-be3b-0ac7f717e906", 00:19:25.733 "assigned_rate_limits": { 00:19:25.733 "rw_ios_per_sec": 0, 00:19:25.733 "rw_mbytes_per_sec": 0, 00:19:25.733 "r_mbytes_per_sec": 0, 00:19:25.733 "w_mbytes_per_sec": 0 00:19:25.733 }, 00:19:25.733 "claimed": false, 00:19:25.733 "zoned": false, 00:19:25.733 "supported_io_types": { 00:19:25.733 "read": true, 00:19:25.733 "write": true, 00:19:25.733 "unmap": true, 00:19:25.733 "flush": false, 00:19:25.733 "reset": true, 00:19:25.733 "nvme_admin": false, 00:19:25.733 "nvme_io": false, 00:19:25.733 "nvme_io_md": false, 00:19:25.733 "write_zeroes": true, 00:19:25.733 "zcopy": false, 00:19:25.733 "get_zone_info": false, 00:19:25.733 "zone_management": false, 00:19:25.733 "zone_append": false, 00:19:25.733 "compare": false, 00:19:25.733 "compare_and_write": false, 00:19:25.733 "abort": false, 00:19:25.733 "seek_hole": true, 00:19:25.733 "seek_data": true, 00:19:25.733 "copy": false, 00:19:25.733 "nvme_iov_md": false 00:19:25.733 }, 00:19:25.733 "driver_specific": { 00:19:25.733 "lvol": { 00:19:25.733 "lvol_store_uuid": "c4f89f1f-c29a-4a00-9cc7-d432cf4a2a4b", 00:19:25.733 "base_bdev": "nvme0n1", 00:19:25.733 "thin_provision": true, 00:19:25.733 "num_allocated_clusters": 0, 00:19:25.733 "snapshot": false, 00:19:25.734 "clone": false, 00:19:25.734 "esnap_clone": false 00:19:25.734 } 00:19:25.734 } 00:19:25.734 } 00:19:25.734 ]' 00:19:25.734 03:03:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:25.734 03:03:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:25.734 03:03:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:25.734 03:03:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:25.734 03:03:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:25.734 03:03:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:25.734 03:03:56 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:25.734 03:03:56 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:25.995 03:03:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:25.995 03:03:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 3bc1d173-5184-4679-be3b-0ac7f717e906 00:19:25.995 03:03:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=3bc1d173-5184-4679-be3b-0ac7f717e906 00:19:25.995 03:03:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:25.995 03:03:56 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:19:25.995 03:03:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:25.995 03:03:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3bc1d173-5184-4679-be3b-0ac7f717e906 00:19:25.995 03:03:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:25.995 { 00:19:25.995 "name": "3bc1d173-5184-4679-be3b-0ac7f717e906", 00:19:25.995 "aliases": [ 00:19:25.995 "lvs/nvme0n1p0" 00:19:25.995 ], 00:19:25.995 "product_name": "Logical Volume", 00:19:25.995 "block_size": 4096, 00:19:25.995 "num_blocks": 26476544, 00:19:25.995 "uuid": "3bc1d173-5184-4679-be3b-0ac7f717e906", 00:19:25.995 "assigned_rate_limits": { 00:19:25.995 "rw_ios_per_sec": 0, 00:19:25.995 "rw_mbytes_per_sec": 0, 00:19:25.995 "r_mbytes_per_sec": 0, 00:19:25.995 "w_mbytes_per_sec": 0 00:19:25.995 }, 00:19:25.995 "claimed": false, 00:19:25.995 "zoned": false, 00:19:25.995 "supported_io_types": { 00:19:25.995 "read": true, 00:19:25.995 "write": true, 00:19:25.995 "unmap": true, 00:19:25.995 "flush": false, 00:19:25.995 "reset": true, 00:19:25.995 "nvme_admin": false, 00:19:25.995 "nvme_io": false, 00:19:25.995 "nvme_io_md": false, 00:19:25.995 "write_zeroes": true, 00:19:25.995 "zcopy": false, 00:19:25.995 "get_zone_info": false, 00:19:25.995 "zone_management": false, 00:19:25.995 "zone_append": false, 00:19:25.995 "compare": false, 00:19:25.995 "compare_and_write": false, 00:19:25.995 "abort": false, 00:19:25.995 "seek_hole": true, 00:19:25.995 "seek_data": true, 00:19:25.995 "copy": false, 00:19:25.995 "nvme_iov_md": false 00:19:25.995 }, 00:19:25.995 "driver_specific": { 00:19:25.995 "lvol": { 00:19:25.995 "lvol_store_uuid": "c4f89f1f-c29a-4a00-9cc7-d432cf4a2a4b", 00:19:25.995 "base_bdev": "nvme0n1", 00:19:25.995 "thin_provision": true, 00:19:25.995 "num_allocated_clusters": 0, 00:19:25.995 "snapshot": false, 00:19:25.995 "clone": false, 00:19:25.995 "esnap_clone": false 00:19:25.995 } 00:19:25.995 } 00:19:25.995 } 00:19:25.995 ]' 00:19:25.995 03:03:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:25.995 03:03:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:25.995 03:03:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:26.256 03:03:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:26.256 03:03:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:26.256 03:03:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:26.256 03:03:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:26.256 03:03:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3bc1d173-5184-4679-be3b-0ac7f717e906 -c nvc0n1p0 --l2p_dram_limit 20 00:19:26.256 [2024-12-05 03:03:57.041133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.256 [2024-12-05 03:03:57.041185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:26.256 [2024-12-05 03:03:57.041199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:26.256 [2024-12-05 03:03:57.041210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.256 [2024-12-05 03:03:57.041268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.256 [2024-12-05 03:03:57.041280] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:26.256 [2024-12-05 03:03:57.041289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:19:26.256 [2024-12-05 03:03:57.041298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.256 [2024-12-05 03:03:57.041315] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:26.256 [2024-12-05 03:03:57.042446] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:26.256 [2024-12-05 03:03:57.042484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.256 [2024-12-05 03:03:57.042496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:26.256 [2024-12-05 03:03:57.042505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.174 ms 00:19:26.256 [2024-12-05 03:03:57.042515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.256 [2024-12-05 03:03:57.042804] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID fc2043bd-b2d2-4df7-9666-67466cf30f4f 00:19:26.257 [2024-12-05 03:03:57.043961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.257 [2024-12-05 03:03:57.043996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:26.257 [2024-12-05 03:03:57.044013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:26.257 [2024-12-05 03:03:57.044020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.257 [2024-12-05 03:03:57.049636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.257 [2024-12-05 03:03:57.049667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:26.257 [2024-12-05 03:03:57.049679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.578 ms 00:19:26.257 [2024-12-05 03:03:57.049688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.257 [2024-12-05 03:03:57.049772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.257 [2024-12-05 03:03:57.049782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:26.257 [2024-12-05 03:03:57.049901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:26.257 [2024-12-05 03:03:57.049909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.257 [2024-12-05 03:03:57.049959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.257 [2024-12-05 03:03:57.049969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:26.257 [2024-12-05 03:03:57.049979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:26.257 [2024-12-05 03:03:57.049986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.257 [2024-12-05 03:03:57.050010] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:26.257 [2024-12-05 03:03:57.053664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.257 [2024-12-05 03:03:57.053694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:26.257 [2024-12-05 03:03:57.053704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.665 ms 00:19:26.257 [2024-12-05 03:03:57.053714] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.257 [2024-12-05 03:03:57.053745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.257 [2024-12-05 03:03:57.053754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:26.257 [2024-12-05 03:03:57.053762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:26.257 [2024-12-05 03:03:57.053771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.257 [2024-12-05 03:03:57.053804] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:26.257 [2024-12-05 03:03:57.053956] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:26.257 [2024-12-05 03:03:57.053967] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:26.257 [2024-12-05 03:03:57.053979] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:26.257 [2024-12-05 03:03:57.053989] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:26.257 [2024-12-05 03:03:57.054001] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:26.257 [2024-12-05 03:03:57.054008] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:26.257 [2024-12-05 03:03:57.054017] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:26.257 [2024-12-05 03:03:57.054024] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:26.257 [2024-12-05 03:03:57.054032] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:26.257 [2024-12-05 03:03:57.054042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.257 [2024-12-05 03:03:57.054050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:26.257 [2024-12-05 03:03:57.054058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:19:26.257 [2024-12-05 03:03:57.054066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.257 [2024-12-05 03:03:57.054166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.257 [2024-12-05 03:03:57.054176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:26.257 [2024-12-05 03:03:57.054183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:26.257 [2024-12-05 03:03:57.054194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.257 [2024-12-05 03:03:57.054282] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:26.257 [2024-12-05 03:03:57.054296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:26.257 [2024-12-05 03:03:57.054304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:26.257 [2024-12-05 03:03:57.054313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:26.257 [2024-12-05 03:03:57.054320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:26.257 [2024-12-05 03:03:57.054328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:26.257 [2024-12-05 03:03:57.054334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:26.257 
[2024-12-05 03:03:57.054343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:26.257 [2024-12-05 03:03:57.054350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:26.257 [2024-12-05 03:03:57.054359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:26.257 [2024-12-05 03:03:57.054366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:26.257 [2024-12-05 03:03:57.054381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:26.257 [2024-12-05 03:03:57.054387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:26.257 [2024-12-05 03:03:57.054396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:26.257 [2024-12-05 03:03:57.054402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:26.257 [2024-12-05 03:03:57.054412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:26.257 [2024-12-05 03:03:57.054418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:26.257 [2024-12-05 03:03:57.054426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:26.257 [2024-12-05 03:03:57.054432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:26.257 [2024-12-05 03:03:57.054440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:26.257 [2024-12-05 03:03:57.054447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:26.257 [2024-12-05 03:03:57.054455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:26.257 [2024-12-05 03:03:57.054461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:26.257 [2024-12-05 03:03:57.054469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:26.257 [2024-12-05 03:03:57.054475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:26.257 [2024-12-05 03:03:57.054483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:26.257 [2024-12-05 03:03:57.054489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:26.257 [2024-12-05 03:03:57.054498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:26.257 [2024-12-05 03:03:57.054504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:26.257 [2024-12-05 03:03:57.054512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:26.257 [2024-12-05 03:03:57.054518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:26.257 [2024-12-05 03:03:57.054528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:26.257 [2024-12-05 03:03:57.054534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:26.257 [2024-12-05 03:03:57.054544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:26.257 [2024-12-05 03:03:57.054550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:26.257 [2024-12-05 03:03:57.054558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:26.257 [2024-12-05 03:03:57.054564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:26.257 [2024-12-05 03:03:57.054572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:26.257 [2024-12-05 03:03:57.054579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:26.257 [2024-12-05 03:03:57.054587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:26.257 [2024-12-05 03:03:57.054593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:26.257 [2024-12-05 03:03:57.054603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:26.257 [2024-12-05 03:03:57.054610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:26.257 [2024-12-05 03:03:57.054618] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:26.257 [2024-12-05 03:03:57.054626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:26.257 [2024-12-05 03:03:57.054634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:26.257 [2024-12-05 03:03:57.054641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:26.257 [2024-12-05 03:03:57.054652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:26.257 [2024-12-05 03:03:57.054658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:26.257 [2024-12-05 03:03:57.054666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:26.257 [2024-12-05 03:03:57.054673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:26.257 [2024-12-05 03:03:57.054681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:26.257 [2024-12-05 03:03:57.054688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:26.258 [2024-12-05 03:03:57.054698] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:26.258 [2024-12-05 03:03:57.054707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:26.258 [2024-12-05 03:03:57.054717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:26.258 [2024-12-05 03:03:57.054724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:26.258 [2024-12-05 03:03:57.054733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:26.258 [2024-12-05 03:03:57.054740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:26.258 [2024-12-05 03:03:57.054749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:26.258 [2024-12-05 03:03:57.054756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:26.258 [2024-12-05 03:03:57.054766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:26.258 [2024-12-05 03:03:57.054773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:26.258 [2024-12-05 03:03:57.054783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:26.258 [2024-12-05 03:03:57.054790] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:26.258 [2024-12-05 03:03:57.054798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:26.258 [2024-12-05 03:03:57.054805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:26.258 [2024-12-05 03:03:57.054814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:26.258 [2024-12-05 03:03:57.054821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:26.258 [2024-12-05 03:03:57.054830] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:26.258 [2024-12-05 03:03:57.054838] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:26.258 [2024-12-05 03:03:57.054849] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:26.258 [2024-12-05 03:03:57.054856] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:26.258 [2024-12-05 03:03:57.054866] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:26.258 [2024-12-05 03:03:57.054873] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:26.258 [2024-12-05 03:03:57.054882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.258 [2024-12-05 03:03:57.054890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:26.258 [2024-12-05 03:03:57.054899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:19:26.258 [2024-12-05 03:03:57.054905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.258 [2024-12-05 03:03:57.054939] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
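The sizes reported in the layout dump above can be cross-checked from values already printed in this run: the backing logical volume exposes 26476544 blocks of 4096 bytes, the superblock reports 20971520 L2P entries of 4 bytes each, and bdev_ftl_create was invoked with --l2p_dram_limit 20. The shell arithmetic below is an illustrative recomputation of those figures, not part of the test scripts.

# Illustrative cross-check of the layout figures above (values taken from this log).
num_blocks=26476544      # bdev_get_bdevs: num_blocks of the lvol backing ftl0
block_size=4096          # bdev_get_bdevs: block_size in bytes
l2p_entries=20971520     # ftl_layout_setup: "L2P entries"
l2p_addr_size=4          # ftl_layout_setup: "L2P address size"

# 26476544 * 4096 B = 103424 MiB -> matches "Base device capacity: 103424.00 MiB"
# and the bdev_size computed just before bdev_ftl_create.
echo $(( num_blocks * block_size / 1024 / 1024 ))

# 20971520 * 4 B = 80 MiB -> matches "Region l2p ... blocks: 80.00 MiB".
echo $(( l2p_entries * l2p_addr_size / 1024 / 1024 ))

The full 80 MiB mapping table does not fit in the 20 MiB DRAM budget passed via --l2p_dram_limit, which is consistent with the later notice that the l2p maximum resident size is 19 (of 20) MiB.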
00:19:26.258 [2024-12-05 03:03:57.054955] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:30.463 [2024-12-05 03:04:00.863330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.463 [2024-12-05 03:04:00.863568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:30.463 [2024-12-05 03:04:00.863593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3808.373 ms 00:19:30.463 [2024-12-05 03:04:00.863602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.464 [2024-12-05 03:04:00.889464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.464 [2024-12-05 03:04:00.889503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:30.464 [2024-12-05 03:04:00.889516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.660 ms 00:19:30.464 [2024-12-05 03:04:00.889523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.464 [2024-12-05 03:04:00.889644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.464 [2024-12-05 03:04:00.889654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:30.464 [2024-12-05 03:04:00.889666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:30.464 [2024-12-05 03:04:00.889674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.464 [2024-12-05 03:04:00.930484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.464 [2024-12-05 03:04:00.930635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:30.464 [2024-12-05 03:04:00.930659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.777 ms 00:19:30.464 [2024-12-05 03:04:00.930669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.464 [2024-12-05 03:04:00.930708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.464 [2024-12-05 03:04:00.930717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:30.464 [2024-12-05 03:04:00.930727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:30.464 [2024-12-05 03:04:00.930736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.464 [2024-12-05 03:04:00.931153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.464 [2024-12-05 03:04:00.931171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:30.464 [2024-12-05 03:04:00.931182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:19:30.464 [2024-12-05 03:04:00.931190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.464 [2024-12-05 03:04:00.931296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.464 [2024-12-05 03:04:00.931304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:30.464 [2024-12-05 03:04:00.931316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:19:30.464 [2024-12-05 03:04:00.931324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.464 [2024-12-05 03:04:00.944505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.464 [2024-12-05 03:04:00.944536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:30.464 [2024-12-05 
03:04:00.944548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.163 ms 00:19:30.464 [2024-12-05 03:04:00.944563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.464 [2024-12-05 03:04:00.956098] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:30.464 [2024-12-05 03:04:00.961677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.464 [2024-12-05 03:04:00.961816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:30.464 [2024-12-05 03:04:00.961832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.052 ms 00:19:30.464 [2024-12-05 03:04:00.961841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.464 [2024-12-05 03:04:01.042102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.464 [2024-12-05 03:04:01.042149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:30.464 [2024-12-05 03:04:01.042163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.238 ms 00:19:30.464 [2024-12-05 03:04:01.042174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.464 [2024-12-05 03:04:01.042333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.464 [2024-12-05 03:04:01.042348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:30.464 [2024-12-05 03:04:01.042357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:19:30.464 [2024-12-05 03:04:01.042369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.464 [2024-12-05 03:04:01.067304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.464 [2024-12-05 03:04:01.067345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:30.464 [2024-12-05 03:04:01.067356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.894 ms 00:19:30.464 [2024-12-05 03:04:01.067367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.464 [2024-12-05 03:04:01.091721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.464 [2024-12-05 03:04:01.091763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:30.464 [2024-12-05 03:04:01.091775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.332 ms 00:19:30.464 [2024-12-05 03:04:01.091785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.464 [2024-12-05 03:04:01.092393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.464 [2024-12-05 03:04:01.092413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:30.464 [2024-12-05 03:04:01.092423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:19:30.464 [2024-12-05 03:04:01.092573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.464 [2024-12-05 03:04:01.179543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.464 [2024-12-05 03:04:01.179746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:30.464 [2024-12-05 03:04:01.179768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.929 ms 00:19:30.464 [2024-12-05 03:04:01.179779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.464 [2024-12-05 
03:04:01.207776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.464 [2024-12-05 03:04:01.207834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:30.464 [2024-12-05 03:04:01.207852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.911 ms 00:19:30.464 [2024-12-05 03:04:01.207862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.464 [2024-12-05 03:04:01.235294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.464 [2024-12-05 03:04:01.235350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:30.464 [2024-12-05 03:04:01.235363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.380 ms 00:19:30.464 [2024-12-05 03:04:01.235373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.464 [2024-12-05 03:04:01.262249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.464 [2024-12-05 03:04:01.262447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:30.464 [2024-12-05 03:04:01.262469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.825 ms 00:19:30.464 [2024-12-05 03:04:01.262479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.464 [2024-12-05 03:04:01.262525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.464 [2024-12-05 03:04:01.262540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:30.464 [2024-12-05 03:04:01.262550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:30.464 [2024-12-05 03:04:01.262560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.464 [2024-12-05 03:04:01.262666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.464 [2024-12-05 03:04:01.262680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:30.464 [2024-12-05 03:04:01.262689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:30.464 [2024-12-05 03:04:01.262698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.464 [2024-12-05 03:04:01.263864] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4222.222 ms, result 0 00:19:30.464 { 00:19:30.464 "name": "ftl0", 00:19:30.464 "uuid": "fc2043bd-b2d2-4df7-9666-67466cf30f4f" 00:19:30.464 } 00:19:30.464 03:04:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:30.464 03:04:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:30.464 03:04:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:30.724 03:04:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:30.984 [2024-12-05 03:04:01.604054] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:30.984 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:30.984 Zero copy mechanism will not be used. 00:19:30.984 Running I/O for 4 seconds... 
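Before this first workload, the trace above shows bdev_ftl_get_stats being filtered through jq and grep to confirm that the new ftl0 device is visible, after which bdevperf is driven over RPC. The lines below are a minimal sketch of that sequence, with the paths and the 69632-byte I/O size exactly as used in this run; it is illustrative rather than a verbatim excerpt of bdevperf.sh.

# Minimal sketch of the sequence traced above (paths as used in this run).
SPDK=/home/vagrant/spdk_repo/spdk

# Sanity check: the freshly created FTL bdev must be reported under its name.
$SPDK/scripts/rpc.py bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0

# First pass: queue depth 1, random writes, 4 s, 69632-byte (68 KiB) I/O.
# 69632 > 65536, hence the "Zero copy mechanism will not be used" notice above.
$SPDK/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632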
00:19:32.871 1173.00 IOPS, 77.89 MiB/s [2024-12-05T03:04:04.657Z] 1580.00 IOPS, 104.92 MiB/s [2024-12-05T03:04:06.040Z] 1773.67 IOPS, 117.78 MiB/s [2024-12-05T03:04:06.040Z] 1798.75 IOPS, 119.45 MiB/s 00:19:35.196 Latency(us) 00:19:35.196 [2024-12-05T03:04:06.040Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.196 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:35.196 ftl0 : 4.00 1798.40 119.43 0.00 0.00 582.01 156.75 3579.27 00:19:35.196 [2024-12-05T03:04:06.040Z] =================================================================================================================== 00:19:35.196 [2024-12-05T03:04:06.040Z] Total : 1798.40 119.43 0.00 0.00 582.01 156.75 3579.27 00:19:35.196 [2024-12-05 03:04:05.614097] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:35.196 { 00:19:35.196 "results": [ 00:19:35.196 { 00:19:35.196 "job": "ftl0", 00:19:35.196 "core_mask": "0x1", 00:19:35.196 "workload": "randwrite", 00:19:35.196 "status": "finished", 00:19:35.196 "queue_depth": 1, 00:19:35.196 "io_size": 69632, 00:19:35.196 "runtime": 4.001324, 00:19:35.196 "iops": 1798.4047280350203, 00:19:35.196 "mibps": 119.42531397107557, 00:19:35.196 "io_failed": 0, 00:19:35.196 "io_timeout": 0, 00:19:35.196 "avg_latency_us": 582.010970197118, 00:19:35.196 "min_latency_us": 156.75076923076924, 00:19:35.196 "max_latency_us": 3579.273846153846 00:19:35.196 } 00:19:35.196 ], 00:19:35.196 "core_count": 1 00:19:35.196 } 00:19:35.196 03:04:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:35.196 [2024-12-05 03:04:05.710714] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:35.196 Running I/O for 4 seconds... 
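The qd=1 results above hang together arithmetically: throughput in MiB/s is IOPS times the 69632-byte I/O size, and IOPS times average latency gives the mean number of outstanding I/Os, which should sit near the configured queue depth of 1. The snippet below replays that check with the numbers from the results JSON; it is a consistency check only, not part of the test.

# Consistency check on the qd=1 randwrite results (values copied from the JSON above).
awk 'BEGIN {
  iops = 1798.4047280350203; io_size = 69632; avg_lat_us = 582.010970197118
  printf "throughput: %.2f MiB/s\n", iops * io_size / 1048576    # ~119.43, matches mibps
  printf "mean outstanding I/O: %.2f\n", iops * avg_lat_us / 1e6 # ~1.05, i.e. ~queue depth 1
}'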
00:19:37.083 7501.00 IOPS, 29.30 MiB/s [2024-12-05T03:04:08.871Z] 7745.50 IOPS, 30.26 MiB/s [2024-12-05T03:04:09.818Z] 7053.67 IOPS, 27.55 MiB/s [2024-12-05T03:04:09.818Z] 6443.00 IOPS, 25.17 MiB/s 00:19:38.974 Latency(us) 00:19:38.974 [2024-12-05T03:04:09.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.974 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:38.974 ftl0 : 4.04 6418.21 25.07 0.00 0.00 19861.44 297.75 54848.59 00:19:38.974 [2024-12-05T03:04:09.818Z] =================================================================================================================== 00:19:38.974 [2024-12-05T03:04:09.818Z] Total : 6418.21 25.07 0.00 0.00 19861.44 0.00 54848.59 00:19:38.974 [2024-12-05 03:04:09.754220] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:38.974 { 00:19:38.974 "results": [ 00:19:38.974 { 00:19:38.974 "job": "ftl0", 00:19:38.974 "core_mask": "0x1", 00:19:38.974 "workload": "randwrite", 00:19:38.974 "status": "finished", 00:19:38.974 "queue_depth": 128, 00:19:38.974 "io_size": 4096, 00:19:38.974 "runtime": 4.035083, 00:19:38.974 "iops": 6418.207506512257, 00:19:38.974 "mibps": 25.071123072313505, 00:19:38.974 "io_failed": 0, 00:19:38.974 "io_timeout": 0, 00:19:38.974 "avg_latency_us": 19861.443826609717, 00:19:38.974 "min_latency_us": 297.7476923076923, 00:19:38.974 "max_latency_us": 54848.59076923077 00:19:38.974 } 00:19:38.974 ], 00:19:38.974 "core_count": 1 00:19:38.974 } 00:19:38.974 03:04:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:39.235 [2024-12-05 03:04:09.875708] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:39.235 Running I/O for 4 seconds... 
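The same sanity check applies to the queue-depth-128 randwrite pass above: 6418 IOPS of 4096-byte writes reproduces the 25.07 MiB/s figure, and IOPS times the ~19.86 ms average latency implies roughly 128 I/Os in flight, matching the configured depth (Little's law). Again this is illustrative arithmetic over values printed in the results JSON, not part of the test scripts.

# Consistency check on the qd=128 randwrite results (values copied from the JSON above).
awk 'BEGIN {
  iops = 6418.207506512257; io_size = 4096; avg_lat_us = 19861.443826609717
  printf "throughput: %.2f MiB/s\n", iops * io_size / 1048576     # ~25.07, matches mibps
  printf "mean outstanding I/O: %.1f\n", iops * avg_lat_us / 1e6  # ~127.5, close to qd 128
}'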
00:19:41.127 4569.00 IOPS, 17.85 MiB/s [2024-12-05T03:04:12.912Z] 4555.00 IOPS, 17.79 MiB/s [2024-12-05T03:04:14.297Z] 4864.33 IOPS, 19.00 MiB/s [2024-12-05T03:04:14.297Z] 5232.00 IOPS, 20.44 MiB/s 00:19:43.453 Latency(us) 00:19:43.453 [2024-12-05T03:04:14.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.453 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:43.453 Verification LBA range: start 0x0 length 0x1400000 00:19:43.453 ftl0 : 4.02 5240.26 20.47 0.00 0.00 24345.52 234.73 102034.51 00:19:43.453 [2024-12-05T03:04:14.297Z] =================================================================================================================== 00:19:43.453 [2024-12-05T03:04:14.297Z] Total : 5240.26 20.47 0.00 0.00 24345.52 0.00 102034.51 00:19:43.453 [2024-12-05 03:04:13.910695] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:43.453 { 00:19:43.453 "results": [ 00:19:43.453 { 00:19:43.453 "job": "ftl0", 00:19:43.453 "core_mask": "0x1", 00:19:43.453 "workload": "verify", 00:19:43.453 "status": "finished", 00:19:43.453 "verify_range": { 00:19:43.453 "start": 0, 00:19:43.453 "length": 20971520 00:19:43.453 }, 00:19:43.453 "queue_depth": 128, 00:19:43.453 "io_size": 4096, 00:19:43.453 "runtime": 4.018124, 00:19:43.453 "iops": 5240.256398259487, 00:19:43.453 "mibps": 20.46975155570112, 00:19:43.453 "io_failed": 0, 00:19:43.453 "io_timeout": 0, 00:19:43.453 "avg_latency_us": 24345.5200187047, 00:19:43.453 "min_latency_us": 234.7323076923077, 00:19:43.453 "max_latency_us": 102034.51076923076 00:19:43.453 } 00:19:43.453 ], 00:19:43.453 "core_count": 1 00:19:43.453 } 00:19:43.453 03:04:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:43.453 [2024-12-05 03:04:14.073540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.453 [2024-12-05 03:04:14.073709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:43.453 [2024-12-05 03:04:14.073728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:43.453 [2024-12-05 03:04:14.073738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.453 [2024-12-05 03:04:14.073763] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:43.453 [2024-12-05 03:04:14.076379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.453 [2024-12-05 03:04:14.076408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:43.453 [2024-12-05 03:04:14.076419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.599 ms 00:19:43.453 [2024-12-05 03:04:14.076428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.453 [2024-12-05 03:04:14.079086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.453 [2024-12-05 03:04:14.079115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:43.453 [2024-12-05 03:04:14.079131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.636 ms 00:19:43.453 [2024-12-05 03:04:14.079138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.453 [2024-12-05 03:04:14.261422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.453 [2024-12-05 03:04:14.261460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:19:43.453 [2024-12-05 03:04:14.261477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 182.264 ms 00:19:43.453 [2024-12-05 03:04:14.261484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.453 [2024-12-05 03:04:14.267654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.453 [2024-12-05 03:04:14.267776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:43.453 [2024-12-05 03:04:14.267797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.137 ms 00:19:43.453 [2024-12-05 03:04:14.267807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.453 [2024-12-05 03:04:14.292092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.453 [2024-12-05 03:04:14.292124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:43.453 [2024-12-05 03:04:14.292137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.228 ms 00:19:43.453 [2024-12-05 03:04:14.292144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.714 [2024-12-05 03:04:14.307581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.715 [2024-12-05 03:04:14.307617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:43.715 [2024-12-05 03:04:14.307630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.401 ms 00:19:43.715 [2024-12-05 03:04:14.307638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.715 [2024-12-05 03:04:14.307774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.715 [2024-12-05 03:04:14.307786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:43.715 [2024-12-05 03:04:14.307798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:19:43.715 [2024-12-05 03:04:14.307805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.715 [2024-12-05 03:04:14.332005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.715 [2024-12-05 03:04:14.332036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:43.715 [2024-12-05 03:04:14.332048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.182 ms 00:19:43.715 [2024-12-05 03:04:14.332056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.715 [2024-12-05 03:04:14.356024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.715 [2024-12-05 03:04:14.356056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:43.715 [2024-12-05 03:04:14.356068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.920 ms 00:19:43.715 [2024-12-05 03:04:14.356088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.715 [2024-12-05 03:04:14.379374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.715 [2024-12-05 03:04:14.379407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:43.715 [2024-12-05 03:04:14.379419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.250 ms 00:19:43.715 [2024-12-05 03:04:14.379426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.715 [2024-12-05 03:04:14.402889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.715 [2024-12-05 03:04:14.403017] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:43.715 [2024-12-05 03:04:14.403039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.396 ms 00:19:43.715 [2024-12-05 03:04:14.403046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.715 [2024-12-05 03:04:14.403092] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:43.715 [2024-12-05 03:04:14.403107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:19:43.715 [2024-12-05 03:04:14.403293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:43.715 [2024-12-05 03:04:14.403518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403920] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:43.716 [2024-12-05 03:04:14.403961] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:43.716 [2024-12-05 03:04:14.403970] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fc2043bd-b2d2-4df7-9666-67466cf30f4f 00:19:43.716 [2024-12-05 03:04:14.403980] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:43.716 [2024-12-05 03:04:14.403989] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:43.716 [2024-12-05 03:04:14.403996] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:43.716 [2024-12-05 03:04:14.404005] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:43.716 [2024-12-05 03:04:14.404012] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:43.716 [2024-12-05 03:04:14.404021] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:43.716 [2024-12-05 03:04:14.404028] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:43.716 [2024-12-05 03:04:14.404037] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:43.716 [2024-12-05 03:04:14.404044] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:43.716 [2024-12-05 03:04:14.404052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.716 [2024-12-05 03:04:14.404059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:43.716 [2024-12-05 03:04:14.404069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.962 ms 00:19:43.716 [2024-12-05 03:04:14.404086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.716 [2024-12-05 03:04:14.416771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.716 [2024-12-05 03:04:14.416800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:43.717 [2024-12-05 03:04:14.416813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.653 ms 00:19:43.717 [2024-12-05 03:04:14.416820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.717 [2024-12-05 03:04:14.417193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.717 [2024-12-05 03:04:14.417203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:43.717 [2024-12-05 03:04:14.417226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:19:43.717 [2024-12-05 03:04:14.417233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.717 [2024-12-05 03:04:14.453414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.717 [2024-12-05 03:04:14.453452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:43.717 [2024-12-05 03:04:14.453466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.717 [2024-12-05 03:04:14.453474] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:43.717 [2024-12-05 03:04:14.453529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.717 [2024-12-05 03:04:14.453538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:43.717 [2024-12-05 03:04:14.453548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.717 [2024-12-05 03:04:14.453555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.717 [2024-12-05 03:04:14.453625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.717 [2024-12-05 03:04:14.453635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:43.717 [2024-12-05 03:04:14.453645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.717 [2024-12-05 03:04:14.453652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.717 [2024-12-05 03:04:14.453668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.717 [2024-12-05 03:04:14.453676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:43.717 [2024-12-05 03:04:14.453686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.717 [2024-12-05 03:04:14.453693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.717 [2024-12-05 03:04:14.534630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.717 [2024-12-05 03:04:14.534698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:43.717 [2024-12-05 03:04:14.534718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.717 [2024-12-05 03:04:14.534726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.978 [2024-12-05 03:04:14.604860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.978 [2024-12-05 03:04:14.605140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:43.978 [2024-12-05 03:04:14.605169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.978 [2024-12-05 03:04:14.605178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.978 [2024-12-05 03:04:14.605300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.978 [2024-12-05 03:04:14.605312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:43.978 [2024-12-05 03:04:14.605323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.978 [2024-12-05 03:04:14.605331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.978 [2024-12-05 03:04:14.605376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.978 [2024-12-05 03:04:14.605386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:43.978 [2024-12-05 03:04:14.605396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.978 [2024-12-05 03:04:14.605404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.978 [2024-12-05 03:04:14.605515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.978 [2024-12-05 03:04:14.605528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:43.978 [2024-12-05 03:04:14.605541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:43.978 [2024-12-05 03:04:14.605550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.978 [2024-12-05 03:04:14.605585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.978 [2024-12-05 03:04:14.605594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:43.978 [2024-12-05 03:04:14.605605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.978 [2024-12-05 03:04:14.605613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.978 [2024-12-05 03:04:14.605656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.978 [2024-12-05 03:04:14.605668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:43.978 [2024-12-05 03:04:14.605679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.978 [2024-12-05 03:04:14.605695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.978 [2024-12-05 03:04:14.605744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.978 [2024-12-05 03:04:14.605754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:43.978 [2024-12-05 03:04:14.605765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.978 [2024-12-05 03:04:14.605773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.978 [2024-12-05 03:04:14.605918] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 532.324 ms, result 0 00:19:43.978 true 00:19:43.978 03:04:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75961 00:19:43.978 03:04:14 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 75961 ']' 00:19:43.978 03:04:14 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 75961 00:19:43.978 03:04:14 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:19:43.978 03:04:14 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:43.978 03:04:14 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75961 00:19:43.978 killing process with pid 75961 00:19:43.978 Received shutdown signal, test time was about 4.000000 seconds 00:19:43.978 00:19:43.978 Latency(us) 00:19:43.978 [2024-12-05T03:04:14.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.978 [2024-12-05T03:04:14.822Z] =================================================================================================================== 00:19:43.978 [2024-12-05T03:04:14.822Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:43.978 03:04:14 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:43.978 03:04:14 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:43.978 03:04:14 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75961' 00:19:43.978 03:04:14 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 75961 00:19:43.978 03:04:14 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 75961 00:19:44.920 Remove shared memory files 00:19:44.920 03:04:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:44.920 03:04:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:44.920 03:04:15 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:44.920 03:04:15 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:44.920 03:04:15 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:44.920 03:04:15 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:44.920 03:04:15 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:44.920 03:04:15 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:44.920 ************************************ 00:19:44.920 END TEST ftl_bdevperf 00:19:44.920 ************************************ 00:19:44.920 00:19:44.920 real 0m22.398s 00:19:44.920 user 0m24.947s 00:19:44.920 sys 0m0.946s 00:19:44.920 03:04:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:44.920 03:04:15 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:44.920 03:04:15 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:44.920 03:04:15 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:44.920 03:04:15 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:44.920 03:04:15 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:44.920 ************************************ 00:19:44.920 START TEST ftl_trim 00:19:44.920 ************************************ 00:19:44.920 03:04:15 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:44.920 * Looking for test storage... 00:19:44.920 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:44.920 03:04:15 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:44.920 03:04:15 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:44.920 03:04:15 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:19:44.920 03:04:15 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:44.920 03:04:15 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:44.920 03:04:15 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:44.920 03:04:15 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:44.920 03:04:15 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:44.920 03:04:15 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:44.920 03:04:15 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:44.920 03:04:15 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:44.920 03:04:15 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:44.920 03:04:15 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:44.920 03:04:15 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:44.920 03:04:15 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:44.920 03:04:15 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:44.920 03:04:15 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:44.920 03:04:15 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:44.920 03:04:15 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:44.920 03:04:15 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:44.920 03:04:15 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:44.920 03:04:15 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:44.920 03:04:15 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:44.920 03:04:15 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:44.920 03:04:15 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:44.920 03:04:15 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:44.920 03:04:15 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:44.920 03:04:15 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:44.920 03:04:15 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:44.920 03:04:15 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:44.920 03:04:15 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:44.920 03:04:15 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:44.920 03:04:15 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:44.920 03:04:15 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:44.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.920 --rc genhtml_branch_coverage=1 00:19:44.920 --rc genhtml_function_coverage=1 00:19:44.920 --rc genhtml_legend=1 00:19:44.920 --rc geninfo_all_blocks=1 00:19:44.920 --rc geninfo_unexecuted_blocks=1 00:19:44.920 00:19:44.920 ' 00:19:44.920 03:04:15 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:44.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.920 --rc genhtml_branch_coverage=1 00:19:44.920 --rc genhtml_function_coverage=1 00:19:44.920 --rc genhtml_legend=1 00:19:44.920 --rc geninfo_all_blocks=1 00:19:44.920 --rc geninfo_unexecuted_blocks=1 00:19:44.920 00:19:44.920 ' 00:19:44.920 03:04:15 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:44.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.920 --rc genhtml_branch_coverage=1 00:19:44.920 --rc genhtml_function_coverage=1 00:19:44.920 --rc genhtml_legend=1 00:19:44.920 --rc geninfo_all_blocks=1 00:19:44.920 --rc geninfo_unexecuted_blocks=1 00:19:44.920 00:19:44.920 ' 00:19:44.920 03:04:15 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:44.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.920 --rc genhtml_branch_coverage=1 00:19:44.920 --rc genhtml_function_coverage=1 00:19:44.920 --rc genhtml_legend=1 00:19:44.920 --rc geninfo_all_blocks=1 00:19:44.920 --rc geninfo_unexecuted_blocks=1 00:19:44.920 00:19:44.920 ' 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
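The cmp_versions trace above (scripts/common.sh) splits each version string on '.', '-' and ':' and compares the resulting fields left to right, which is how the harness decides that lcov 1.15 is older than 2 before enabling the extra branch/function coverage flags. A simplified, self-contained sketch of that comparison, assuming purely numeric dot-separated fields (not the full common.sh helper):

    version_lt() {  # return 0 if $1 < $2, comparing dot-separated numeric fields
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1   # equal is not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 is older than 2"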
00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:44.920 03:04:15 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76315 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76315 00:19:44.920 03:04:15 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76315 ']' 00:19:44.920 03:04:15 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.920 03:04:15 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:44.921 03:04:15 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.921 03:04:15 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.921 03:04:15 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.921 03:04:15 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:45.181 [2024-12-05 03:04:15.781112] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:19:45.181 [2024-12-05 03:04:15.781262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76315 ] 00:19:45.181 [2024-12-05 03:04:15.947167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:45.441 [2024-12-05 03:04:16.080435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:45.441 [2024-12-05 03:04:16.080793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:45.441 [2024-12-05 03:04:16.080878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.012 03:04:16 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:46.012 03:04:16 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:46.012 03:04:16 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:46.012 03:04:16 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:46.012 03:04:16 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:46.012 03:04:16 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:46.012 03:04:16 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:46.012 03:04:16 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:46.272 03:04:17 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:46.272 03:04:17 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:46.272 03:04:17 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:46.272 03:04:17 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:46.272 03:04:17 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:46.272 03:04:17 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:46.272 03:04:17 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:46.272 03:04:17 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:46.533 03:04:17 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:46.533 { 00:19:46.533 "name": "nvme0n1", 00:19:46.533 "aliases": [ 
00:19:46.533 "28f13041-d9b8-4b14-b8d5-2926994df289" 00:19:46.533 ], 00:19:46.533 "product_name": "NVMe disk", 00:19:46.533 "block_size": 4096, 00:19:46.533 "num_blocks": 1310720, 00:19:46.533 "uuid": "28f13041-d9b8-4b14-b8d5-2926994df289", 00:19:46.533 "numa_id": -1, 00:19:46.533 "assigned_rate_limits": { 00:19:46.533 "rw_ios_per_sec": 0, 00:19:46.533 "rw_mbytes_per_sec": 0, 00:19:46.533 "r_mbytes_per_sec": 0, 00:19:46.533 "w_mbytes_per_sec": 0 00:19:46.533 }, 00:19:46.533 "claimed": true, 00:19:46.533 "claim_type": "read_many_write_one", 00:19:46.533 "zoned": false, 00:19:46.533 "supported_io_types": { 00:19:46.533 "read": true, 00:19:46.533 "write": true, 00:19:46.533 "unmap": true, 00:19:46.533 "flush": true, 00:19:46.533 "reset": true, 00:19:46.533 "nvme_admin": true, 00:19:46.533 "nvme_io": true, 00:19:46.533 "nvme_io_md": false, 00:19:46.533 "write_zeroes": true, 00:19:46.533 "zcopy": false, 00:19:46.533 "get_zone_info": false, 00:19:46.533 "zone_management": false, 00:19:46.533 "zone_append": false, 00:19:46.533 "compare": true, 00:19:46.533 "compare_and_write": false, 00:19:46.533 "abort": true, 00:19:46.533 "seek_hole": false, 00:19:46.533 "seek_data": false, 00:19:46.533 "copy": true, 00:19:46.533 "nvme_iov_md": false 00:19:46.533 }, 00:19:46.533 "driver_specific": { 00:19:46.533 "nvme": [ 00:19:46.533 { 00:19:46.533 "pci_address": "0000:00:11.0", 00:19:46.533 "trid": { 00:19:46.533 "trtype": "PCIe", 00:19:46.533 "traddr": "0000:00:11.0" 00:19:46.533 }, 00:19:46.533 "ctrlr_data": { 00:19:46.533 "cntlid": 0, 00:19:46.533 "vendor_id": "0x1b36", 00:19:46.533 "model_number": "QEMU NVMe Ctrl", 00:19:46.533 "serial_number": "12341", 00:19:46.533 "firmware_revision": "8.0.0", 00:19:46.533 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:46.533 "oacs": { 00:19:46.533 "security": 0, 00:19:46.533 "format": 1, 00:19:46.533 "firmware": 0, 00:19:46.533 "ns_manage": 1 00:19:46.533 }, 00:19:46.533 "multi_ctrlr": false, 00:19:46.533 "ana_reporting": false 00:19:46.533 }, 00:19:46.533 "vs": { 00:19:46.533 "nvme_version": "1.4" 00:19:46.533 }, 00:19:46.533 "ns_data": { 00:19:46.534 "id": 1, 00:19:46.534 "can_share": false 00:19:46.534 } 00:19:46.534 } 00:19:46.534 ], 00:19:46.534 "mp_policy": "active_passive" 00:19:46.534 } 00:19:46.534 } 00:19:46.534 ]' 00:19:46.534 03:04:17 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:46.534 03:04:17 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:46.534 03:04:17 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:46.534 03:04:17 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:46.534 03:04:17 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:46.534 03:04:17 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:19:46.534 03:04:17 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:46.534 03:04:17 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:46.534 03:04:17 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:46.534 03:04:17 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:46.534 03:04:17 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:46.795 03:04:17 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=c4f89f1f-c29a-4a00-9cc7-d432cf4a2a4b 00:19:46.795 03:04:17 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:46.795 03:04:17 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u c4f89f1f-c29a-4a00-9cc7-d432cf4a2a4b 00:19:47.055 03:04:17 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:47.316 03:04:18 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=d4e8b217-9877-40b2-856c-fca636b4ce6e 00:19:47.316 03:04:18 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d4e8b217-9877-40b2-856c-fca636b4ce6e 00:19:47.577 03:04:18 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=131ea324-8467-4149-9cd8-d2cd175545f7 00:19:47.577 03:04:18 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 131ea324-8467-4149-9cd8-d2cd175545f7 00:19:47.577 03:04:18 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:47.577 03:04:18 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:47.577 03:04:18 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=131ea324-8467-4149-9cd8-d2cd175545f7 00:19:47.577 03:04:18 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:47.577 03:04:18 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 131ea324-8467-4149-9cd8-d2cd175545f7 00:19:47.577 03:04:18 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=131ea324-8467-4149-9cd8-d2cd175545f7 00:19:47.577 03:04:18 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:47.577 03:04:18 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:47.577 03:04:18 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:47.577 03:04:18 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 131ea324-8467-4149-9cd8-d2cd175545f7 00:19:47.841 03:04:18 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:47.841 { 00:19:47.841 "name": "131ea324-8467-4149-9cd8-d2cd175545f7", 00:19:47.841 "aliases": [ 00:19:47.841 "lvs/nvme0n1p0" 00:19:47.841 ], 00:19:47.841 "product_name": "Logical Volume", 00:19:47.841 "block_size": 4096, 00:19:47.841 "num_blocks": 26476544, 00:19:47.841 "uuid": "131ea324-8467-4149-9cd8-d2cd175545f7", 00:19:47.841 "assigned_rate_limits": { 00:19:47.841 "rw_ios_per_sec": 0, 00:19:47.841 "rw_mbytes_per_sec": 0, 00:19:47.841 "r_mbytes_per_sec": 0, 00:19:47.841 "w_mbytes_per_sec": 0 00:19:47.841 }, 00:19:47.841 "claimed": false, 00:19:47.841 "zoned": false, 00:19:47.841 "supported_io_types": { 00:19:47.841 "read": true, 00:19:47.841 "write": true, 00:19:47.841 "unmap": true, 00:19:47.841 "flush": false, 00:19:47.841 "reset": true, 00:19:47.841 "nvme_admin": false, 00:19:47.841 "nvme_io": false, 00:19:47.841 "nvme_io_md": false, 00:19:47.841 "write_zeroes": true, 00:19:47.841 "zcopy": false, 00:19:47.841 "get_zone_info": false, 00:19:47.841 "zone_management": false, 00:19:47.841 "zone_append": false, 00:19:47.841 "compare": false, 00:19:47.841 "compare_and_write": false, 00:19:47.841 "abort": false, 00:19:47.841 "seek_hole": true, 00:19:47.841 "seek_data": true, 00:19:47.841 "copy": false, 00:19:47.841 "nvme_iov_md": false 00:19:47.841 }, 00:19:47.841 "driver_specific": { 00:19:47.841 "lvol": { 00:19:47.841 "lvol_store_uuid": "d4e8b217-9877-40b2-856c-fca636b4ce6e", 00:19:47.841 "base_bdev": "nvme0n1", 00:19:47.841 "thin_provision": true, 00:19:47.841 "num_allocated_clusters": 0, 00:19:47.841 "snapshot": false, 00:19:47.841 "clone": false, 00:19:47.841 "esnap_clone": false 00:19:47.841 } 00:19:47.841 } 00:19:47.841 } 00:19:47.841 ]' 00:19:47.841 03:04:18 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:47.841 03:04:18 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:47.841 03:04:18 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:47.841 03:04:18 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:47.841 03:04:18 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:47.841 03:04:18 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:47.841 03:04:18 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:47.841 03:04:18 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:47.841 03:04:18 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:48.163 03:04:18 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:48.163 03:04:18 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:48.163 03:04:18 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 131ea324-8467-4149-9cd8-d2cd175545f7 00:19:48.163 03:04:18 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=131ea324-8467-4149-9cd8-d2cd175545f7 00:19:48.163 03:04:18 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:48.163 03:04:18 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:48.163 03:04:18 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:48.163 03:04:18 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 131ea324-8467-4149-9cd8-d2cd175545f7 00:19:48.444 03:04:19 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:48.444 { 00:19:48.444 "name": "131ea324-8467-4149-9cd8-d2cd175545f7", 00:19:48.444 "aliases": [ 00:19:48.444 "lvs/nvme0n1p0" 00:19:48.444 ], 00:19:48.444 "product_name": "Logical Volume", 00:19:48.444 "block_size": 4096, 00:19:48.444 "num_blocks": 26476544, 00:19:48.444 "uuid": "131ea324-8467-4149-9cd8-d2cd175545f7", 00:19:48.444 "assigned_rate_limits": { 00:19:48.444 "rw_ios_per_sec": 0, 00:19:48.444 "rw_mbytes_per_sec": 0, 00:19:48.444 "r_mbytes_per_sec": 0, 00:19:48.444 "w_mbytes_per_sec": 0 00:19:48.444 }, 00:19:48.444 "claimed": false, 00:19:48.444 "zoned": false, 00:19:48.444 "supported_io_types": { 00:19:48.444 "read": true, 00:19:48.444 "write": true, 00:19:48.444 "unmap": true, 00:19:48.444 "flush": false, 00:19:48.444 "reset": true, 00:19:48.444 "nvme_admin": false, 00:19:48.444 "nvme_io": false, 00:19:48.444 "nvme_io_md": false, 00:19:48.444 "write_zeroes": true, 00:19:48.444 "zcopy": false, 00:19:48.444 "get_zone_info": false, 00:19:48.444 "zone_management": false, 00:19:48.444 "zone_append": false, 00:19:48.444 "compare": false, 00:19:48.444 "compare_and_write": false, 00:19:48.444 "abort": false, 00:19:48.444 "seek_hole": true, 00:19:48.444 "seek_data": true, 00:19:48.444 "copy": false, 00:19:48.444 "nvme_iov_md": false 00:19:48.444 }, 00:19:48.444 "driver_specific": { 00:19:48.444 "lvol": { 00:19:48.444 "lvol_store_uuid": "d4e8b217-9877-40b2-856c-fca636b4ce6e", 00:19:48.444 "base_bdev": "nvme0n1", 00:19:48.444 "thin_provision": true, 00:19:48.444 "num_allocated_clusters": 0, 00:19:48.444 "snapshot": false, 00:19:48.444 "clone": false, 00:19:48.444 "esnap_clone": false 00:19:48.444 } 00:19:48.444 } 00:19:48.444 } 00:19:48.444 ]' 00:19:48.444 03:04:19 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:48.444 03:04:19 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:19:48.444 03:04:19 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:48.444 03:04:19 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:48.444 03:04:19 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:48.444 03:04:19 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:48.444 03:04:19 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:48.444 03:04:19 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:48.444 03:04:19 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:48.444 03:04:19 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:48.444 03:04:19 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 131ea324-8467-4149-9cd8-d2cd175545f7 00:19:48.444 03:04:19 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=131ea324-8467-4149-9cd8-d2cd175545f7 00:19:48.444 03:04:19 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:48.444 03:04:19 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:48.444 03:04:19 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:48.444 03:04:19 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 131ea324-8467-4149-9cd8-d2cd175545f7 00:19:48.705 03:04:19 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:48.705 { 00:19:48.705 "name": "131ea324-8467-4149-9cd8-d2cd175545f7", 00:19:48.705 "aliases": [ 00:19:48.705 "lvs/nvme0n1p0" 00:19:48.705 ], 00:19:48.705 "product_name": "Logical Volume", 00:19:48.705 "block_size": 4096, 00:19:48.705 "num_blocks": 26476544, 00:19:48.705 "uuid": "131ea324-8467-4149-9cd8-d2cd175545f7", 00:19:48.705 "assigned_rate_limits": { 00:19:48.705 "rw_ios_per_sec": 0, 00:19:48.705 "rw_mbytes_per_sec": 0, 00:19:48.705 "r_mbytes_per_sec": 0, 00:19:48.705 "w_mbytes_per_sec": 0 00:19:48.705 }, 00:19:48.705 "claimed": false, 00:19:48.705 "zoned": false, 00:19:48.705 "supported_io_types": { 00:19:48.705 "read": true, 00:19:48.705 "write": true, 00:19:48.705 "unmap": true, 00:19:48.705 "flush": false, 00:19:48.705 "reset": true, 00:19:48.705 "nvme_admin": false, 00:19:48.705 "nvme_io": false, 00:19:48.705 "nvme_io_md": false, 00:19:48.705 "write_zeroes": true, 00:19:48.705 "zcopy": false, 00:19:48.705 "get_zone_info": false, 00:19:48.705 "zone_management": false, 00:19:48.705 "zone_append": false, 00:19:48.705 "compare": false, 00:19:48.705 "compare_and_write": false, 00:19:48.705 "abort": false, 00:19:48.705 "seek_hole": true, 00:19:48.705 "seek_data": true, 00:19:48.705 "copy": false, 00:19:48.705 "nvme_iov_md": false 00:19:48.705 }, 00:19:48.705 "driver_specific": { 00:19:48.705 "lvol": { 00:19:48.705 "lvol_store_uuid": "d4e8b217-9877-40b2-856c-fca636b4ce6e", 00:19:48.705 "base_bdev": "nvme0n1", 00:19:48.705 "thin_provision": true, 00:19:48.705 "num_allocated_clusters": 0, 00:19:48.705 "snapshot": false, 00:19:48.705 "clone": false, 00:19:48.705 "esnap_clone": false 00:19:48.705 } 00:19:48.705 } 00:19:48.705 } 00:19:48.705 ]' 00:19:48.705 03:04:19 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:48.705 03:04:19 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:48.705 03:04:19 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:48.967 03:04:19 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:19:48.967 03:04:19 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:48.967 03:04:19 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:48.967 03:04:19 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:48.968 03:04:19 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 131ea324-8467-4149-9cd8-d2cd175545f7 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:48.968 [2024-12-05 03:04:19.745947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.968 [2024-12-05 03:04:19.745987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:48.968 [2024-12-05 03:04:19.746001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:48.968 [2024-12-05 03:04:19.746007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.968 [2024-12-05 03:04:19.748233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.968 [2024-12-05 03:04:19.748366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:48.968 [2024-12-05 03:04:19.748383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.203 ms 00:19:48.968 [2024-12-05 03:04:19.748390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.968 [2024-12-05 03:04:19.748658] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:48.968 [2024-12-05 03:04:19.749309] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:48.968 [2024-12-05 03:04:19.749335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.968 [2024-12-05 03:04:19.749343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:48.968 [2024-12-05 03:04:19.749351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:19:48.968 [2024-12-05 03:04:19.749357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.968 [2024-12-05 03:04:19.749456] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 39a924c5-8443-42fa-9f63-e8e457595e05 00:19:48.968 [2024-12-05 03:04:19.750401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.968 [2024-12-05 03:04:19.750431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:48.968 [2024-12-05 03:04:19.750438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:48.968 [2024-12-05 03:04:19.750446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.968 [2024-12-05 03:04:19.755179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.968 [2024-12-05 03:04:19.755299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:48.968 [2024-12-05 03:04:19.755312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.676 ms 00:19:48.968 [2024-12-05 03:04:19.755319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.968 [2024-12-05 03:04:19.755422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.968 [2024-12-05 03:04:19.755432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:48.968 [2024-12-05 03:04:19.755439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.057 ms 00:19:48.968 [2024-12-05 03:04:19.755450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.968 [2024-12-05 03:04:19.755476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.968 [2024-12-05 03:04:19.755483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:48.968 [2024-12-05 03:04:19.755489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:48.968 [2024-12-05 03:04:19.755497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.968 [2024-12-05 03:04:19.755520] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:48.968 [2024-12-05 03:04:19.758426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.968 [2024-12-05 03:04:19.758528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:48.968 [2024-12-05 03:04:19.758543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.909 ms 00:19:48.968 [2024-12-05 03:04:19.758549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.968 [2024-12-05 03:04:19.758584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.968 [2024-12-05 03:04:19.758600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:48.968 [2024-12-05 03:04:19.758608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:48.968 [2024-12-05 03:04:19.758614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.968 [2024-12-05 03:04:19.758639] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:48.968 [2024-12-05 03:04:19.758747] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:48.968 [2024-12-05 03:04:19.758759] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:48.968 [2024-12-05 03:04:19.758768] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:48.968 [2024-12-05 03:04:19.758777] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:48.968 [2024-12-05 03:04:19.758783] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:48.968 [2024-12-05 03:04:19.758790] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:48.968 [2024-12-05 03:04:19.758796] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:48.968 [2024-12-05 03:04:19.758804] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:48.968 [2024-12-05 03:04:19.758811] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:48.968 [2024-12-05 03:04:19.758818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.968 [2024-12-05 03:04:19.758824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:48.968 [2024-12-05 03:04:19.758831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:19:48.968 [2024-12-05 03:04:19.758837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.968 [2024-12-05 03:04:19.758908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.968 
[2024-12-05 03:04:19.758915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:48.968 [2024-12-05 03:04:19.758922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:48.968 [2024-12-05 03:04:19.758928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.968 [2024-12-05 03:04:19.759021] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:48.968 [2024-12-05 03:04:19.759028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:48.968 [2024-12-05 03:04:19.759035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:48.968 [2024-12-05 03:04:19.759041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.968 [2024-12-05 03:04:19.759048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:48.968 [2024-12-05 03:04:19.759053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:48.968 [2024-12-05 03:04:19.759059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:48.968 [2024-12-05 03:04:19.759064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:48.968 [2024-12-05 03:04:19.759086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:48.968 [2024-12-05 03:04:19.759092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:48.968 [2024-12-05 03:04:19.759098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:48.968 [2024-12-05 03:04:19.759103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:48.968 [2024-12-05 03:04:19.759111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:48.968 [2024-12-05 03:04:19.759117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:48.968 [2024-12-05 03:04:19.759123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:48.968 [2024-12-05 03:04:19.759130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.968 [2024-12-05 03:04:19.759137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:48.968 [2024-12-05 03:04:19.759143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:48.968 [2024-12-05 03:04:19.759149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.968 [2024-12-05 03:04:19.759154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:48.968 [2024-12-05 03:04:19.759160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:48.968 [2024-12-05 03:04:19.759165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:48.968 [2024-12-05 03:04:19.759172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:48.968 [2024-12-05 03:04:19.759177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:48.968 [2024-12-05 03:04:19.759183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:48.968 [2024-12-05 03:04:19.759189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:48.968 [2024-12-05 03:04:19.759195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:48.968 [2024-12-05 03:04:19.759200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:48.968 [2024-12-05 03:04:19.759207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:19:48.968 [2024-12-05 03:04:19.759212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:48.968 [2024-12-05 03:04:19.759218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:48.968 [2024-12-05 03:04:19.759225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:48.968 [2024-12-05 03:04:19.759233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:48.968 [2024-12-05 03:04:19.759238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:48.968 [2024-12-05 03:04:19.759244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:48.968 [2024-12-05 03:04:19.759249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:48.968 [2024-12-05 03:04:19.759255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:48.968 [2024-12-05 03:04:19.759260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:48.968 [2024-12-05 03:04:19.759267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:48.968 [2024-12-05 03:04:19.759272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.968 [2024-12-05 03:04:19.759279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:48.968 [2024-12-05 03:04:19.759283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:48.969 [2024-12-05 03:04:19.759289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.969 [2024-12-05 03:04:19.759294] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:48.969 [2024-12-05 03:04:19.759301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:48.969 [2024-12-05 03:04:19.759306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:48.969 [2024-12-05 03:04:19.759313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.969 [2024-12-05 03:04:19.759321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:48.969 [2024-12-05 03:04:19.759329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:48.969 [2024-12-05 03:04:19.759334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:48.969 [2024-12-05 03:04:19.759340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:48.969 [2024-12-05 03:04:19.759345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:48.969 [2024-12-05 03:04:19.759351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:48.969 [2024-12-05 03:04:19.759358] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:48.969 [2024-12-05 03:04:19.759366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:48.969 [2024-12-05 03:04:19.759373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:48.969 [2024-12-05 03:04:19.759380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:48.969 [2024-12-05 03:04:19.759385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:19:48.969 [2024-12-05 03:04:19.759392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:48.969 [2024-12-05 03:04:19.759398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:48.969 [2024-12-05 03:04:19.759404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:48.969 [2024-12-05 03:04:19.759410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:48.969 [2024-12-05 03:04:19.759416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:48.969 [2024-12-05 03:04:19.759422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:48.969 [2024-12-05 03:04:19.759431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:48.969 [2024-12-05 03:04:19.759436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:48.969 [2024-12-05 03:04:19.759443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:48.969 [2024-12-05 03:04:19.759448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:48.969 [2024-12-05 03:04:19.759455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:48.969 [2024-12-05 03:04:19.759460] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:48.969 [2024-12-05 03:04:19.759470] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:48.969 [2024-12-05 03:04:19.759476] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:48.969 [2024-12-05 03:04:19.759483] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:48.969 [2024-12-05 03:04:19.759488] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:48.969 [2024-12-05 03:04:19.759494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:48.969 [2024-12-05 03:04:19.759500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.969 [2024-12-05 03:04:19.759507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:48.969 [2024-12-05 03:04:19.759512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:19:48.969 [2024-12-05 03:04:19.759518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.969 [2024-12-05 03:04:19.759583] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:19:48.969 [2024-12-05 03:04:19.759593] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:51.515 [2024-12-05 03:04:22.340242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.515 [2024-12-05 03:04:22.340328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:51.515 [2024-12-05 03:04:22.340344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2580.646 ms 00:19:51.515 [2024-12-05 03:04:22.340355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.773 [2024-12-05 03:04:22.368512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.774 [2024-12-05 03:04:22.368564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:51.774 [2024-12-05 03:04:22.368577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.895 ms 00:19:51.774 [2024-12-05 03:04:22.368587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.774 [2024-12-05 03:04:22.368719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.774 [2024-12-05 03:04:22.368732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:51.774 [2024-12-05 03:04:22.368759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:51.774 [2024-12-05 03:04:22.368775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.774 [2024-12-05 03:04:22.412701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.774 [2024-12-05 03:04:22.412747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:51.774 [2024-12-05 03:04:22.412760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.891 ms 00:19:51.774 [2024-12-05 03:04:22.412772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.774 [2024-12-05 03:04:22.412855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.774 [2024-12-05 03:04:22.412868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:51.774 [2024-12-05 03:04:22.412878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:51.774 [2024-12-05 03:04:22.412888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.774 [2024-12-05 03:04:22.413342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.774 [2024-12-05 03:04:22.413364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:51.774 [2024-12-05 03:04:22.413374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:19:51.774 [2024-12-05 03:04:22.413384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.774 [2024-12-05 03:04:22.413504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.774 [2024-12-05 03:04:22.413516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:51.774 [2024-12-05 03:04:22.413541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:19:51.774 [2024-12-05 03:04:22.413553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.774 [2024-12-05 03:04:22.429462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.774 [2024-12-05 03:04:22.429494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:19:51.774 [2024-12-05 03:04:22.429504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.878 ms 00:19:51.774 [2024-12-05 03:04:22.429514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.774 [2024-12-05 03:04:22.441610] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:51.774 [2024-12-05 03:04:22.458984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.774 [2024-12-05 03:04:22.459014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:51.774 [2024-12-05 03:04:22.459027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.357 ms 00:19:51.774 [2024-12-05 03:04:22.459035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.774 [2024-12-05 03:04:22.538948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.774 [2024-12-05 03:04:22.538993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:51.774 [2024-12-05 03:04:22.539009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.811 ms 00:19:51.774 [2024-12-05 03:04:22.539017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.774 [2024-12-05 03:04:22.539276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.774 [2024-12-05 03:04:22.539290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:51.774 [2024-12-05 03:04:22.539304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:19:51.774 [2024-12-05 03:04:22.539312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.774 [2024-12-05 03:04:22.562741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.774 [2024-12-05 03:04:22.562773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:51.774 [2024-12-05 03:04:22.562786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.395 ms 00:19:51.774 [2024-12-05 03:04:22.562797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.774 [2024-12-05 03:04:22.585993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.774 [2024-12-05 03:04:22.586023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:51.774 [2024-12-05 03:04:22.586037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.121 ms 00:19:51.774 [2024-12-05 03:04:22.586044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.774 [2024-12-05 03:04:22.586647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.774 [2024-12-05 03:04:22.586672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:51.774 [2024-12-05 03:04:22.586684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:19:51.774 [2024-12-05 03:04:22.586692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.033 [2024-12-05 03:04:22.660396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.033 [2024-12-05 03:04:22.660642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:52.033 [2024-12-05 03:04:22.660667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.671 ms 00:19:52.033 [2024-12-05 03:04:22.660676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
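The "l2p maximum resident size is: 59 (of 60) MiB" notice above follows from the layout dumped during startup: 23592960 L2P entries at 4 bytes each is a 90 MiB mapping table, while bdev_ftl_create was invoked with --l2p_dram_limit 60, so only a little under 60 MiB of that table may stay resident in DRAM at a time. A quick plain-shell check of that arithmetic, with the values copied from the trace (illustrative only):

    entries=23592960        # "L2P entries: 23592960"
    addr_size=4             # "L2P address size: 4"
    echo $(( entries * addr_size / 1024 / 1024 )) MiB   # -> 90, matches "Region l2p ... blocks: 90.00 MiB"

    # Addressable user space covered by those entries, one 4 KiB block per entry:
    echo $(( entries * 4096 / 1024 / 1024 )) MiB        # -> 92160 MiB (~90 GiB), matching ftl0 num_blocks 23592960

    # --l2p_dram_limit 60 caps the resident portion of the 90 MiB table,
    # which is why the cache reports a maximum resident size just under the 60 MiB limit.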
00:19:52.033 [2024-12-05 03:04:22.685478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.033 [2024-12-05 03:04:22.685511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:52.033 [2024-12-05 03:04:22.685525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.703 ms 00:19:52.033 [2024-12-05 03:04:22.685534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.033 [2024-12-05 03:04:22.708477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.034 [2024-12-05 03:04:22.708507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:52.034 [2024-12-05 03:04:22.708520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.880 ms 00:19:52.034 [2024-12-05 03:04:22.708528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.034 [2024-12-05 03:04:22.731567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.034 [2024-12-05 03:04:22.731695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:52.034 [2024-12-05 03:04:22.731715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.970 ms 00:19:52.034 [2024-12-05 03:04:22.731723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.034 [2024-12-05 03:04:22.731782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.034 [2024-12-05 03:04:22.731793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:52.034 [2024-12-05 03:04:22.731806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:52.034 [2024-12-05 03:04:22.731814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.034 [2024-12-05 03:04:22.731896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.034 [2024-12-05 03:04:22.731905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:52.034 [2024-12-05 03:04:22.731915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:52.034 [2024-12-05 03:04:22.731922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.034 [2024-12-05 03:04:22.733286] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:52.034 [2024-12-05 03:04:22.736170] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2987.004 ms, result 0 00:19:52.034 [2024-12-05 03:04:22.736987] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:52.034 { 00:19:52.034 "name": "ftl0", 00:19:52.034 "uuid": "39a924c5-8443-42fa-9f63-e8e457595e05" 00:19:52.034 } 00:19:52.034 03:04:22 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:52.034 03:04:22 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:52.034 03:04:22 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:52.034 03:04:22 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:19:52.034 03:04:22 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:52.034 03:04:22 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:52.034 03:04:22 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:52.293 03:04:22 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:52.551 [ 00:19:52.551 { 00:19:52.551 "name": "ftl0", 00:19:52.551 "aliases": [ 00:19:52.551 "39a924c5-8443-42fa-9f63-e8e457595e05" 00:19:52.551 ], 00:19:52.551 "product_name": "FTL disk", 00:19:52.551 "block_size": 4096, 00:19:52.551 "num_blocks": 23592960, 00:19:52.551 "uuid": "39a924c5-8443-42fa-9f63-e8e457595e05", 00:19:52.551 "assigned_rate_limits": { 00:19:52.551 "rw_ios_per_sec": 0, 00:19:52.551 "rw_mbytes_per_sec": 0, 00:19:52.551 "r_mbytes_per_sec": 0, 00:19:52.551 "w_mbytes_per_sec": 0 00:19:52.551 }, 00:19:52.551 "claimed": false, 00:19:52.551 "zoned": false, 00:19:52.551 "supported_io_types": { 00:19:52.551 "read": true, 00:19:52.551 "write": true, 00:19:52.551 "unmap": true, 00:19:52.551 "flush": true, 00:19:52.551 "reset": false, 00:19:52.551 "nvme_admin": false, 00:19:52.551 "nvme_io": false, 00:19:52.551 "nvme_io_md": false, 00:19:52.551 "write_zeroes": true, 00:19:52.551 "zcopy": false, 00:19:52.551 "get_zone_info": false, 00:19:52.551 "zone_management": false, 00:19:52.551 "zone_append": false, 00:19:52.551 "compare": false, 00:19:52.551 "compare_and_write": false, 00:19:52.551 "abort": false, 00:19:52.551 "seek_hole": false, 00:19:52.551 "seek_data": false, 00:19:52.551 "copy": false, 00:19:52.551 "nvme_iov_md": false 00:19:52.551 }, 00:19:52.552 "driver_specific": { 00:19:52.552 "ftl": { 00:19:52.552 "base_bdev": "131ea324-8467-4149-9cd8-d2cd175545f7", 00:19:52.552 "cache": "nvc0n1p0" 00:19:52.552 } 00:19:52.552 } 00:19:52.552 } 00:19:52.552 ] 00:19:52.552 03:04:23 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:19:52.552 03:04:23 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:52.552 03:04:23 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:52.552 03:04:23 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:52.552 03:04:23 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:52.810 03:04:23 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:52.810 { 00:19:52.810 "name": "ftl0", 00:19:52.810 "aliases": [ 00:19:52.810 "39a924c5-8443-42fa-9f63-e8e457595e05" 00:19:52.810 ], 00:19:52.810 "product_name": "FTL disk", 00:19:52.810 "block_size": 4096, 00:19:52.810 "num_blocks": 23592960, 00:19:52.810 "uuid": "39a924c5-8443-42fa-9f63-e8e457595e05", 00:19:52.810 "assigned_rate_limits": { 00:19:52.810 "rw_ios_per_sec": 0, 00:19:52.810 "rw_mbytes_per_sec": 0, 00:19:52.810 "r_mbytes_per_sec": 0, 00:19:52.810 "w_mbytes_per_sec": 0 00:19:52.810 }, 00:19:52.810 "claimed": false, 00:19:52.810 "zoned": false, 00:19:52.810 "supported_io_types": { 00:19:52.810 "read": true, 00:19:52.810 "write": true, 00:19:52.810 "unmap": true, 00:19:52.810 "flush": true, 00:19:52.810 "reset": false, 00:19:52.810 "nvme_admin": false, 00:19:52.810 "nvme_io": false, 00:19:52.810 "nvme_io_md": false, 00:19:52.810 "write_zeroes": true, 00:19:52.810 "zcopy": false, 00:19:52.810 "get_zone_info": false, 00:19:52.810 "zone_management": false, 00:19:52.810 "zone_append": false, 00:19:52.810 "compare": false, 00:19:52.810 "compare_and_write": false, 00:19:52.810 "abort": false, 00:19:52.810 "seek_hole": false, 00:19:52.810 "seek_data": false, 00:19:52.810 "copy": false, 00:19:52.810 "nvme_iov_md": false 00:19:52.810 }, 00:19:52.810 "driver_specific": { 00:19:52.810 "ftl": { 00:19:52.810 "base_bdev": "131ea324-8467-4149-9cd8-d2cd175545f7", 
00:19:52.810 "cache": "nvc0n1p0" 00:19:52.810 } 00:19:52.810 } 00:19:52.810 } 00:19:52.810 ]' 00:19:52.810 03:04:23 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:52.810 03:04:23 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:52.810 03:04:23 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:53.069 [2024-12-05 03:04:23.780145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.069 [2024-12-05 03:04:23.780183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:53.069 [2024-12-05 03:04:23.780194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:53.069 [2024-12-05 03:04:23.780203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.069 [2024-12-05 03:04:23.780229] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:53.069 [2024-12-05 03:04:23.782459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.069 [2024-12-05 03:04:23.782582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:53.069 [2024-12-05 03:04:23.782604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.216 ms 00:19:53.069 [2024-12-05 03:04:23.782610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.069 [2024-12-05 03:04:23.783063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.069 [2024-12-05 03:04:23.783087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:53.069 [2024-12-05 03:04:23.783097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:19:53.069 [2024-12-05 03:04:23.783104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.069 [2024-12-05 03:04:23.785838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.069 [2024-12-05 03:04:23.785919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:53.069 [2024-12-05 03:04:23.785941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.711 ms 00:19:53.069 [2024-12-05 03:04:23.785949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.069 [2024-12-05 03:04:23.791377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.069 [2024-12-05 03:04:23.791401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:53.069 [2024-12-05 03:04:23.791410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.388 ms 00:19:53.069 [2024-12-05 03:04:23.791418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.069 [2024-12-05 03:04:23.809112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.069 [2024-12-05 03:04:23.809139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:53.069 [2024-12-05 03:04:23.809152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.621 ms 00:19:53.069 [2024-12-05 03:04:23.809158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.070 [2024-12-05 03:04:23.821617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.070 [2024-12-05 03:04:23.821644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:53.070 [2024-12-05 03:04:23.821657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 12.410 ms 00:19:53.070 [2024-12-05 03:04:23.821664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.070 [2024-12-05 03:04:23.821818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.070 [2024-12-05 03:04:23.821827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:53.070 [2024-12-05 03:04:23.821836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:19:53.070 [2024-12-05 03:04:23.821841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.070 [2024-12-05 03:04:23.839442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.070 [2024-12-05 03:04:23.839466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:53.070 [2024-12-05 03:04:23.839476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.568 ms 00:19:53.070 [2024-12-05 03:04:23.839482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.070 [2024-12-05 03:04:23.857146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.070 [2024-12-05 03:04:23.857171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:53.070 [2024-12-05 03:04:23.857183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.603 ms 00:19:53.070 [2024-12-05 03:04:23.857188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.070 [2024-12-05 03:04:23.874000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.070 [2024-12-05 03:04:23.874023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:53.070 [2024-12-05 03:04:23.874032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.766 ms 00:19:53.070 [2024-12-05 03:04:23.874038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.070 [2024-12-05 03:04:23.891102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.070 [2024-12-05 03:04:23.891124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:53.070 [2024-12-05 03:04:23.891134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.956 ms 00:19:53.070 [2024-12-05 03:04:23.891139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.070 [2024-12-05 03:04:23.891185] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:53.070 [2024-12-05 03:04:23.891197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891250] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 
[2024-12-05 03:04:23.891435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:53.070 [2024-12-05 03:04:23.891549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:19:53.071 [2024-12-05 03:04:23.891603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:53.071 [2024-12-05 03:04:23.891902] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:53.071 [2024-12-05 03:04:23.891911] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 39a924c5-8443-42fa-9f63-e8e457595e05 00:19:53.071 [2024-12-05 03:04:23.891917] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:53.071 [2024-12-05 03:04:23.891924] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:53.071 [2024-12-05 03:04:23.891932] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:53.071 [2024-12-05 03:04:23.891939] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:53.071 [2024-12-05 03:04:23.891945] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:53.071 [2024-12-05 03:04:23.891952] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:19:53.071 [2024-12-05 03:04:23.891958] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:53.071 [2024-12-05 03:04:23.891965] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:53.071 [2024-12-05 03:04:23.891970] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:53.071 [2024-12-05 03:04:23.891976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.071 [2024-12-05 03:04:23.891982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:53.071 [2024-12-05 03:04:23.891990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.793 ms 00:19:53.071 [2024-12-05 03:04:23.891996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.071 [2024-12-05 03:04:23.901790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.071 [2024-12-05 03:04:23.901814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:53.071 [2024-12-05 03:04:23.901825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.763 ms 00:19:53.071 [2024-12-05 03:04:23.901830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.071 [2024-12-05 03:04:23.902162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.071 [2024-12-05 03:04:23.902172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:53.071 [2024-12-05 03:04:23.902181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:19:53.071 [2024-12-05 03:04:23.902187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.330 [2024-12-05 03:04:23.938598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.330 [2024-12-05 03:04:23.938624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:53.331 [2024-12-05 03:04:23.938635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.331 [2024-12-05 03:04:23.938642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.331 [2024-12-05 03:04:23.938720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.331 [2024-12-05 03:04:23.938728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:53.331 [2024-12-05 03:04:23.938737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.331 [2024-12-05 03:04:23.938743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.331 [2024-12-05 03:04:23.938799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.331 [2024-12-05 03:04:23.938809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:53.331 [2024-12-05 03:04:23.938819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.331 [2024-12-05 03:04:23.938826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.331 [2024-12-05 03:04:23.938851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.331 [2024-12-05 03:04:23.938858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:53.331 [2024-12-05 03:04:23.938866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.331 [2024-12-05 03:04:23.938871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.331 [2024-12-05 03:04:24.005174] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.331 [2024-12-05 03:04:24.005205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:53.331 [2024-12-05 03:04:24.005217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.331 [2024-12-05 03:04:24.005224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.331 [2024-12-05 03:04:24.055899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.331 [2024-12-05 03:04:24.055931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:53.331 [2024-12-05 03:04:24.055942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.331 [2024-12-05 03:04:24.055950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.331 [2024-12-05 03:04:24.056030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.331 [2024-12-05 03:04:24.056039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:53.331 [2024-12-05 03:04:24.056052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.331 [2024-12-05 03:04:24.056058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.331 [2024-12-05 03:04:24.056149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.331 [2024-12-05 03:04:24.056157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:53.331 [2024-12-05 03:04:24.056165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.331 [2024-12-05 03:04:24.056172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.331 [2024-12-05 03:04:24.056277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.331 [2024-12-05 03:04:24.056287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:53.331 [2024-12-05 03:04:24.056295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.331 [2024-12-05 03:04:24.056303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.331 [2024-12-05 03:04:24.056352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.331 [2024-12-05 03:04:24.056360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:53.331 [2024-12-05 03:04:24.056368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.331 [2024-12-05 03:04:24.056373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.331 [2024-12-05 03:04:24.056425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.331 [2024-12-05 03:04:24.056432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:53.331 [2024-12-05 03:04:24.056442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.331 [2024-12-05 03:04:24.056449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.331 [2024-12-05 03:04:24.056505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.331 [2024-12-05 03:04:24.056513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:53.331 [2024-12-05 03:04:24.056521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.331 [2024-12-05 03:04:24.056527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:53.331 [2024-12-05 03:04:24.056706] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 276.543 ms, result 0 00:19:53.331 true 00:19:53.331 03:04:24 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76315 00:19:53.331 03:04:24 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76315 ']' 00:19:53.331 03:04:24 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76315 00:19:53.331 03:04:24 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:53.331 03:04:24 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.331 03:04:24 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76315 00:19:53.331 killing process with pid 76315 00:19:53.331 03:04:24 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:53.331 03:04:24 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:53.331 03:04:24 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76315' 00:19:53.331 03:04:24 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76315 00:19:53.331 03:04:24 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76315 00:19:59.912 03:04:29 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:59.912 65536+0 records in 00:19:59.912 65536+0 records out 00:19:59.912 268435456 bytes (268 MB, 256 MiB) copied, 0.999438 s, 269 MB/s 00:19:59.912 03:04:30 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:59.912 [2024-12-05 03:04:30.732922] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:19:59.912 [2024-12-05 03:04:30.733042] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76498 ] 00:20:00.172 [2024-12-05 03:04:30.890598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.172 [2024-12-05 03:04:30.980510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.429 [2024-12-05 03:04:31.212604] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:00.430 [2024-12-05 03:04:31.212660] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:00.691 [2024-12-05 03:04:31.369131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.691 [2024-12-05 03:04:31.369290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:00.691 [2024-12-05 03:04:31.369307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:00.691 [2024-12-05 03:04:31.369315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.691 [2024-12-05 03:04:31.371518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.691 [2024-12-05 03:04:31.371548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:00.691 [2024-12-05 03:04:31.371556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.185 ms 00:20:00.691 [2024-12-05 03:04:31.371562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.691 [2024-12-05 03:04:31.372264] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:00.691 [2024-12-05 03:04:31.372844] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:00.691 [2024-12-05 03:04:31.372866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.691 [2024-12-05 03:04:31.372874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:00.691 [2024-12-05 03:04:31.372881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.621 ms 00:20:00.691 [2024-12-05 03:04:31.372887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.691 [2024-12-05 03:04:31.374423] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:00.691 [2024-12-05 03:04:31.385005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.691 [2024-12-05 03:04:31.385030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:00.691 [2024-12-05 03:04:31.385040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.584 ms 00:20:00.691 [2024-12-05 03:04:31.385046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.691 [2024-12-05 03:04:31.385128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.691 [2024-12-05 03:04:31.385138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:00.691 [2024-12-05 03:04:31.385145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:00.691 [2024-12-05 03:04:31.385151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.691 [2024-12-05 03:04:31.391297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:00.691 [2024-12-05 03:04:31.391417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:00.691 [2024-12-05 03:04:31.391430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.116 ms 00:20:00.691 [2024-12-05 03:04:31.391436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.691 [2024-12-05 03:04:31.391512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.691 [2024-12-05 03:04:31.391520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:00.691 [2024-12-05 03:04:31.391527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:00.691 [2024-12-05 03:04:31.391533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.691 [2024-12-05 03:04:31.391551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.691 [2024-12-05 03:04:31.391558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:00.691 [2024-12-05 03:04:31.391564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:00.691 [2024-12-05 03:04:31.391571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.691 [2024-12-05 03:04:31.391589] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:00.691 [2024-12-05 03:04:31.394510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.691 [2024-12-05 03:04:31.394605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:00.691 [2024-12-05 03:04:31.394617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.925 ms 00:20:00.691 [2024-12-05 03:04:31.394623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.691 [2024-12-05 03:04:31.394658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.691 [2024-12-05 03:04:31.394665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:00.691 [2024-12-05 03:04:31.394672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:00.691 [2024-12-05 03:04:31.394678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.691 [2024-12-05 03:04:31.394694] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:00.691 [2024-12-05 03:04:31.394710] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:00.691 [2024-12-05 03:04:31.394738] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:00.691 [2024-12-05 03:04:31.394750] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:00.691 [2024-12-05 03:04:31.394832] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:00.691 [2024-12-05 03:04:31.394842] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:00.691 [2024-12-05 03:04:31.394850] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:00.691 [2024-12-05 03:04:31.394860] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:00.691 [2024-12-05 03:04:31.394867] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:00.691 [2024-12-05 03:04:31.394874] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:00.691 [2024-12-05 03:04:31.394879] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:00.691 [2024-12-05 03:04:31.394885] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:00.691 [2024-12-05 03:04:31.394891] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:00.691 [2024-12-05 03:04:31.394897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.691 [2024-12-05 03:04:31.394903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:00.691 [2024-12-05 03:04:31.394910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:20:00.691 [2024-12-05 03:04:31.394915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.691 [2024-12-05 03:04:31.394982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.691 [2024-12-05 03:04:31.394991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:00.691 [2024-12-05 03:04:31.394997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:00.691 [2024-12-05 03:04:31.395002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.691 [2024-12-05 03:04:31.395092] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:00.691 [2024-12-05 03:04:31.395101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:00.691 [2024-12-05 03:04:31.395108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:00.691 [2024-12-05 03:04:31.395115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.691 [2024-12-05 03:04:31.395121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:00.692 [2024-12-05 03:04:31.395127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:00.692 [2024-12-05 03:04:31.395132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:00.692 [2024-12-05 03:04:31.395138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:00.692 [2024-12-05 03:04:31.395144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:00.692 [2024-12-05 03:04:31.395149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:00.692 [2024-12-05 03:04:31.395157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:00.692 [2024-12-05 03:04:31.395168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:00.692 [2024-12-05 03:04:31.395173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:00.692 [2024-12-05 03:04:31.395179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:00.692 [2024-12-05 03:04:31.395184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:00.692 [2024-12-05 03:04:31.395190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.692 [2024-12-05 03:04:31.395195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:00.692 [2024-12-05 03:04:31.395200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:00.692 [2024-12-05 03:04:31.395206] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.692 [2024-12-05 03:04:31.395211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:00.692 [2024-12-05 03:04:31.395217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:00.692 [2024-12-05 03:04:31.395222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.692 [2024-12-05 03:04:31.395227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:00.692 [2024-12-05 03:04:31.395232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:00.692 [2024-12-05 03:04:31.395237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.692 [2024-12-05 03:04:31.395242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:00.692 [2024-12-05 03:04:31.395247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:00.692 [2024-12-05 03:04:31.395252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.692 [2024-12-05 03:04:31.395258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:00.692 [2024-12-05 03:04:31.395263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:00.692 [2024-12-05 03:04:31.395268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.692 [2024-12-05 03:04:31.395272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:00.692 [2024-12-05 03:04:31.395277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:00.692 [2024-12-05 03:04:31.395282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:00.692 [2024-12-05 03:04:31.395287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:00.692 [2024-12-05 03:04:31.395292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:00.692 [2024-12-05 03:04:31.395296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:00.692 [2024-12-05 03:04:31.395301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:00.692 [2024-12-05 03:04:31.395307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:00.692 [2024-12-05 03:04:31.395313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.692 [2024-12-05 03:04:31.395318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:00.692 [2024-12-05 03:04:31.395323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:00.692 [2024-12-05 03:04:31.395329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.692 [2024-12-05 03:04:31.395335] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:00.692 [2024-12-05 03:04:31.395343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:00.692 [2024-12-05 03:04:31.395351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:00.692 [2024-12-05 03:04:31.395357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.692 [2024-12-05 03:04:31.395363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:00.692 [2024-12-05 03:04:31.395368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:00.692 [2024-12-05 03:04:31.395373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:00.692 
[2024-12-05 03:04:31.395378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:00.692 [2024-12-05 03:04:31.395383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:00.692 [2024-12-05 03:04:31.395387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:00.692 [2024-12-05 03:04:31.395394] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:00.692 [2024-12-05 03:04:31.395401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:00.692 [2024-12-05 03:04:31.395407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:00.692 [2024-12-05 03:04:31.395413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:00.692 [2024-12-05 03:04:31.395419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:00.692 [2024-12-05 03:04:31.395424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:00.692 [2024-12-05 03:04:31.395429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:00.692 [2024-12-05 03:04:31.395435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:00.692 [2024-12-05 03:04:31.395440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:00.692 [2024-12-05 03:04:31.395445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:00.692 [2024-12-05 03:04:31.395451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:00.692 [2024-12-05 03:04:31.395456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:00.692 [2024-12-05 03:04:31.395461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:00.692 [2024-12-05 03:04:31.395466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:00.692 [2024-12-05 03:04:31.395472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:00.692 [2024-12-05 03:04:31.395477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:00.692 [2024-12-05 03:04:31.395483] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:00.692 [2024-12-05 03:04:31.395489] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:00.692 [2024-12-05 03:04:31.395496] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:00.692 [2024-12-05 03:04:31.395502] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:00.692 [2024-12-05 03:04:31.395507] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:00.692 [2024-12-05 03:04:31.395513] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:00.692 [2024-12-05 03:04:31.395519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.692 [2024-12-05 03:04:31.395528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:00.692 [2024-12-05 03:04:31.395533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.493 ms 00:20:00.692 [2024-12-05 03:04:31.395539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.692 [2024-12-05 03:04:31.419725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.692 [2024-12-05 03:04:31.419756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:00.692 [2024-12-05 03:04:31.419764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.124 ms 00:20:00.692 [2024-12-05 03:04:31.419771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.692 [2024-12-05 03:04:31.419867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.692 [2024-12-05 03:04:31.419875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:00.692 [2024-12-05 03:04:31.419882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:00.692 [2024-12-05 03:04:31.419888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.692 [2024-12-05 03:04:31.459237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.692 [2024-12-05 03:04:31.459268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:00.692 [2024-12-05 03:04:31.459280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.332 ms 00:20:00.692 [2024-12-05 03:04:31.459287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.692 [2024-12-05 03:04:31.459348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.692 [2024-12-05 03:04:31.459357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:00.692 [2024-12-05 03:04:31.459364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:00.692 [2024-12-05 03:04:31.459370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.692 [2024-12-05 03:04:31.459746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.692 [2024-12-05 03:04:31.459759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:00.693 [2024-12-05 03:04:31.459768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:20:00.693 [2024-12-05 03:04:31.459778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.693 [2024-12-05 03:04:31.459896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.693 [2024-12-05 03:04:31.459911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:00.693 [2024-12-05 03:04:31.459918] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:20:00.693 [2024-12-05 03:04:31.459924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.693 [2024-12-05 03:04:31.472128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.693 [2024-12-05 03:04:31.472151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:00.693 [2024-12-05 03:04:31.472159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.186 ms 00:20:00.693 [2024-12-05 03:04:31.472166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.693 [2024-12-05 03:04:31.493322] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:00.693 [2024-12-05 03:04:31.493368] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:00.693 [2024-12-05 03:04:31.493383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.693 [2024-12-05 03:04:31.493392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:00.693 [2024-12-05 03:04:31.493403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.113 ms 00:20:00.693 [2024-12-05 03:04:31.493410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.693 [2024-12-05 03:04:31.518475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.693 [2024-12-05 03:04:31.518613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:00.693 [2024-12-05 03:04:31.518630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.964 ms 00:20:00.693 [2024-12-05 03:04:31.518638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.693 [2024-12-05 03:04:31.530566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.693 [2024-12-05 03:04:31.530606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:00.693 [2024-12-05 03:04:31.530618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.633 ms 00:20:00.693 [2024-12-05 03:04:31.530625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.954 [2024-12-05 03:04:31.542171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.954 [2024-12-05 03:04:31.542202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:00.954 [2024-12-05 03:04:31.542212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.480 ms 00:20:00.954 [2024-12-05 03:04:31.542219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.954 [2024-12-05 03:04:31.542833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.954 [2024-12-05 03:04:31.542851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:00.954 [2024-12-05 03:04:31.542860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:20:00.954 [2024-12-05 03:04:31.542867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.954 [2024-12-05 03:04:31.599047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.954 [2024-12-05 03:04:31.599109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:00.954 [2024-12-05 03:04:31.599121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 56.158 ms 00:20:00.954 [2024-12-05 03:04:31.599130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.954 [2024-12-05 03:04:31.610119] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:00.954 [2024-12-05 03:04:31.623716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.954 [2024-12-05 03:04:31.623750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:00.954 [2024-12-05 03:04:31.623762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.500 ms 00:20:00.954 [2024-12-05 03:04:31.623769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.954 [2024-12-05 03:04:31.623840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.954 [2024-12-05 03:04:31.623851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:00.954 [2024-12-05 03:04:31.623859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:00.954 [2024-12-05 03:04:31.623867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.954 [2024-12-05 03:04:31.623911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.954 [2024-12-05 03:04:31.623920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:00.954 [2024-12-05 03:04:31.623928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:00.954 [2024-12-05 03:04:31.623935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.954 [2024-12-05 03:04:31.623967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.954 [2024-12-05 03:04:31.623977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:00.954 [2024-12-05 03:04:31.623985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:00.955 [2024-12-05 03:04:31.623992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.955 [2024-12-05 03:04:31.624021] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:00.955 [2024-12-05 03:04:31.624031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.955 [2024-12-05 03:04:31.624038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:00.955 [2024-12-05 03:04:31.624046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:00.955 [2024-12-05 03:04:31.624053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.955 [2024-12-05 03:04:31.647725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.955 [2024-12-05 03:04:31.647757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:00.955 [2024-12-05 03:04:31.647768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.626 ms 00:20:00.955 [2024-12-05 03:04:31.647776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.955 [2024-12-05 03:04:31.647856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.955 [2024-12-05 03:04:31.647866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:00.955 [2024-12-05 03:04:31.647875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:00.955 [2024-12-05 03:04:31.647882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:00.955 [2024-12-05 03:04:31.649019] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:00.955 [2024-12-05 03:04:31.652122] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 279.609 ms, result 0 00:20:00.955 [2024-12-05 03:04:31.653368] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:00.955 [2024-12-05 03:04:31.666280] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:01.897  [2024-12-05T03:04:33.685Z] Copying: 15/256 [MB] (15 MBps) [2024-12-05T03:04:35.073Z] Copying: 32/256 [MB] (16 MBps) [2024-12-05T03:04:36.015Z] Copying: 49/256 [MB] (16 MBps) [2024-12-05T03:04:36.960Z] Copying: 65/256 [MB] (15 MBps) [2024-12-05T03:04:37.905Z] Copying: 84/256 [MB] (19 MBps) [2024-12-05T03:04:38.850Z] Copying: 101/256 [MB] (16 MBps) [2024-12-05T03:04:39.795Z] Copying: 116/256 [MB] (15 MBps) [2024-12-05T03:04:40.734Z] Copying: 130/256 [MB] (13 MBps) [2024-12-05T03:04:41.677Z] Copying: 159/256 [MB] (28 MBps) [2024-12-05T03:04:43.065Z] Copying: 175/256 [MB] (16 MBps) [2024-12-05T03:04:44.010Z] Copying: 189/256 [MB] (13 MBps) [2024-12-05T03:04:44.951Z] Copying: 201/256 [MB] (11 MBps) [2024-12-05T03:04:45.897Z] Copying: 222/256 [MB] (21 MBps) [2024-12-05T03:04:45.897Z] Copying: 254/256 [MB] (32 MBps) [2024-12-05T03:04:45.897Z] Copying: 256/256 [MB] (average 18 MBps)[2024-12-05 03:04:45.722703] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:15.053 [2024-12-05 03:04:45.730308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.053 [2024-12-05 03:04:45.730411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:15.053 [2024-12-05 03:04:45.730485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:15.053 [2024-12-05 03:04:45.730519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.053 [2024-12-05 03:04:45.730669] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:15.053 [2024-12-05 03:04:45.732773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.053 [2024-12-05 03:04:45.732864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:15.053 [2024-12-05 03:04:45.733059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.050 ms 00:20:15.053 [2024-12-05 03:04:45.733095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.053 [2024-12-05 03:04:45.734793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.053 [2024-12-05 03:04:45.734817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:15.053 [2024-12-05 03:04:45.734828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.669 ms 00:20:15.053 [2024-12-05 03:04:45.734836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.053 [2024-12-05 03:04:45.740666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.053 [2024-12-05 03:04:45.740699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:15.053 [2024-12-05 03:04:45.740710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.813 ms 00:20:15.053 [2024-12-05 03:04:45.740719] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.053 [2024-12-05 03:04:45.746148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.053 [2024-12-05 03:04:45.746177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:15.053 [2024-12-05 03:04:45.746189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.384 ms 00:20:15.053 [2024-12-05 03:04:45.746198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.053 [2024-12-05 03:04:45.763979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.053 [2024-12-05 03:04:45.764085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:15.053 [2024-12-05 03:04:45.764101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.737 ms 00:20:15.053 [2024-12-05 03:04:45.764110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.053 [2024-12-05 03:04:45.775612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.053 [2024-12-05 03:04:45.775714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:15.053 [2024-12-05 03:04:45.775735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.469 ms 00:20:15.053 [2024-12-05 03:04:45.775744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.053 [2024-12-05 03:04:45.775871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.054 [2024-12-05 03:04:45.775889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:15.054 [2024-12-05 03:04:45.775899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:20:15.054 [2024-12-05 03:04:45.775916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.054 [2024-12-05 03:04:45.793977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.054 [2024-12-05 03:04:45.794002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:15.054 [2024-12-05 03:04:45.794012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.043 ms 00:20:15.054 [2024-12-05 03:04:45.794020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.054 [2024-12-05 03:04:45.811115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.054 [2024-12-05 03:04:45.811141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:15.054 [2024-12-05 03:04:45.811151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.048 ms 00:20:15.054 [2024-12-05 03:04:45.811159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.054 [2024-12-05 03:04:45.828337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.054 [2024-12-05 03:04:45.828362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:15.054 [2024-12-05 03:04:45.828372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.144 ms 00:20:15.054 [2024-12-05 03:04:45.828379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.054 [2024-12-05 03:04:45.846092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.054 [2024-12-05 03:04:45.846118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:15.054 [2024-12-05 03:04:45.846129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 17.653 ms 00:20:15.054 [2024-12-05 03:04:45.846137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.054 [2024-12-05 03:04:45.846170] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:15.054 [2024-12-05 03:04:45.846185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 
[2024-12-05 03:04:45.846413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:20:15.054 [2024-12-05 03:04:45.846662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:15.054 [2024-12-05 03:04:45.846922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:15.055 [2024-12-05 03:04:45.846932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:15.055 [2024-12-05 03:04:45.846942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:15.055 [2024-12-05 03:04:45.846952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:15.055 [2024-12-05 03:04:45.846962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:15.055 [2024-12-05 03:04:45.846971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:15.055 [2024-12-05 03:04:45.846981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:15.055 [2024-12-05 03:04:45.846991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:15.055 [2024-12-05 03:04:45.847000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:15.055 [2024-12-05 03:04:45.847010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:15.055 [2024-12-05 03:04:45.847021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:15.055 [2024-12-05 03:04:45.847031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:15.055 [2024-12-05 03:04:45.847040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:15.055 [2024-12-05 03:04:45.847050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:15.055 [2024-12-05 03:04:45.847060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:15.055 [2024-12-05 03:04:45.847069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:15.055 [2024-12-05 03:04:45.847097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:15.055 [2024-12-05 03:04:45.847107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:15.055 [2024-12-05 03:04:45.847117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:15.055 [2024-12-05 03:04:45.847128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:15.055 [2024-12-05 03:04:45.847143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:15.055 [2024-12-05 03:04:45.847153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:15.055 [2024-12-05 03:04:45.847163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:15.055 [2024-12-05 03:04:45.847173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:15.055 [2024-12-05 03:04:45.847183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:15.055 [2024-12-05 03:04:45.847193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:15.055 [2024-12-05 03:04:45.847210] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:15.055 [2024-12-05 03:04:45.847220] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 39a924c5-8443-42fa-9f63-e8e457595e05 00:20:15.055 [2024-12-05 03:04:45.847230] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:15.055 [2024-12-05 03:04:45.847240] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:15.055 [2024-12-05 03:04:45.847249] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:15.055 [2024-12-05 03:04:45.847258] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:15.055 [2024-12-05 03:04:45.847267] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:15.055 [2024-12-05 03:04:45.847277] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:15.055 [2024-12-05 03:04:45.847286] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:15.055 [2024-12-05 03:04:45.847295] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:15.055 [2024-12-05 03:04:45.847304] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:15.055 [2024-12-05 03:04:45.847313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.055 [2024-12-05 03:04:45.847325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:15.055 [2024-12-05 03:04:45.847336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.144 ms 00:20:15.055 [2024-12-05 03:04:45.847345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.055 [2024-12-05 03:04:45.857666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.055 [2024-12-05 03:04:45.857764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:15.055 [2024-12-05 03:04:45.857780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.301 ms 00:20:15.055 [2024-12-05 03:04:45.857789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.055 [2024-12-05 03:04:45.858125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.055 [2024-12-05 03:04:45.858141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:15.055 [2024-12-05 03:04:45.858151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:20:15.055 [2024-12-05 03:04:45.858159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.055 [2024-12-05 03:04:45.885316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.055 [2024-12-05 03:04:45.885344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:15.055 [2024-12-05 03:04:45.885355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.055 [2024-12-05 03:04:45.885363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.055 [2024-12-05 03:04:45.885439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.055 [2024-12-05 
03:04:45.885450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:15.055 [2024-12-05 03:04:45.885460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.055 [2024-12-05 03:04:45.885469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.055 [2024-12-05 03:04:45.885516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.055 [2024-12-05 03:04:45.885527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:15.055 [2024-12-05 03:04:45.885538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.055 [2024-12-05 03:04:45.885547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.055 [2024-12-05 03:04:45.885566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.055 [2024-12-05 03:04:45.885578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:15.055 [2024-12-05 03:04:45.885589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.055 [2024-12-05 03:04:45.885598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.315 [2024-12-05 03:04:45.944343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.315 [2024-12-05 03:04:45.944471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:15.315 [2024-12-05 03:04:45.944488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.315 [2024-12-05 03:04:45.944497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.315 [2024-12-05 03:04:45.992980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.315 [2024-12-05 03:04:45.993012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:15.315 [2024-12-05 03:04:45.993024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.315 [2024-12-05 03:04:45.993032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.315 [2024-12-05 03:04:45.993101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.315 [2024-12-05 03:04:45.993113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:15.315 [2024-12-05 03:04:45.993124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.315 [2024-12-05 03:04:45.993132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.315 [2024-12-05 03:04:45.993162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.315 [2024-12-05 03:04:45.993172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:15.315 [2024-12-05 03:04:45.993187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.315 [2024-12-05 03:04:45.993198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.315 [2024-12-05 03:04:45.993298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.315 [2024-12-05 03:04:45.993310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:15.315 [2024-12-05 03:04:45.993320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.315 [2024-12-05 03:04:45.993329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.315 [2024-12-05 03:04:45.993363] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.315 [2024-12-05 03:04:45.993375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:15.315 [2024-12-05 03:04:45.993385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.315 [2024-12-05 03:04:45.993397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.315 [2024-12-05 03:04:45.993436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.315 [2024-12-05 03:04:45.993447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:15.315 [2024-12-05 03:04:45.993458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.315 [2024-12-05 03:04:45.993467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.315 [2024-12-05 03:04:45.993512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.315 [2024-12-05 03:04:45.993523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:15.315 [2024-12-05 03:04:45.993536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.315 [2024-12-05 03:04:45.993546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.315 [2024-12-05 03:04:45.993687] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 263.360 ms, result 0 00:20:15.883 00:20:15.883 00:20:15.883 03:04:46 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76668 00:20:15.883 03:04:46 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76668 00:20:15.883 03:04:46 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:15.883 03:04:46 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76668 ']' 00:20:15.883 03:04:46 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.883 03:04:46 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:15.883 03:04:46 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.883 03:04:46 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:15.883 03:04:46 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:15.883 [2024-12-05 03:04:46.646684] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:20:15.883 [2024-12-05 03:04:46.647205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76668 ] 00:20:16.144 [2024-12-05 03:04:46.797240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.144 [2024-12-05 03:04:46.871037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.784 03:04:47 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:16.784 03:04:47 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:16.784 03:04:47 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:17.059 [2024-12-05 03:04:47.666384] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:17.059 [2024-12-05 03:04:47.666431] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:17.059 [2024-12-05 03:04:47.834583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.059 [2024-12-05 03:04:47.834616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:17.059 [2024-12-05 03:04:47.834628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:17.059 [2024-12-05 03:04:47.834635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.059 [2024-12-05 03:04:47.836698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.059 [2024-12-05 03:04:47.836824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:17.059 [2024-12-05 03:04:47.836839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.048 ms 00:20:17.059 [2024-12-05 03:04:47.836845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.059 [2024-12-05 03:04:47.836903] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:17.059 [2024-12-05 03:04:47.837430] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:17.059 [2024-12-05 03:04:47.837449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.059 [2024-12-05 03:04:47.837455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:17.059 [2024-12-05 03:04:47.837467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:20:17.059 [2024-12-05 03:04:47.837472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.059 [2024-12-05 03:04:47.838435] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:17.059 [2024-12-05 03:04:47.847985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.059 [2024-12-05 03:04:47.848110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:17.059 [2024-12-05 03:04:47.848124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.553 ms 00:20:17.059 [2024-12-05 03:04:47.848132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.059 [2024-12-05 03:04:47.848198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.059 [2024-12-05 03:04:47.848209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:17.059 [2024-12-05 03:04:47.848215] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:17.059 [2024-12-05 03:04:47.848222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.059 [2024-12-05 03:04:47.852493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.059 [2024-12-05 03:04:47.852521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:17.059 [2024-12-05 03:04:47.852529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.227 ms 00:20:17.059 [2024-12-05 03:04:47.852537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.059 [2024-12-05 03:04:47.852618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.059 [2024-12-05 03:04:47.852627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:17.059 [2024-12-05 03:04:47.852633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:20:17.059 [2024-12-05 03:04:47.852642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.059 [2024-12-05 03:04:47.852659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.059 [2024-12-05 03:04:47.852666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:17.059 [2024-12-05 03:04:47.852672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:17.059 [2024-12-05 03:04:47.852678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.059 [2024-12-05 03:04:47.852694] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:17.059 [2024-12-05 03:04:47.855328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.059 [2024-12-05 03:04:47.855429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:17.059 [2024-12-05 03:04:47.855444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.636 ms 00:20:17.059 [2024-12-05 03:04:47.855449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.059 [2024-12-05 03:04:47.855481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.059 [2024-12-05 03:04:47.855487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:17.059 [2024-12-05 03:04:47.855495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:17.059 [2024-12-05 03:04:47.855502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.059 [2024-12-05 03:04:47.855518] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:17.059 [2024-12-05 03:04:47.855533] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:17.059 [2024-12-05 03:04:47.855566] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:17.059 [2024-12-05 03:04:47.855578] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:17.059 [2024-12-05 03:04:47.855657] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:17.059 [2024-12-05 03:04:47.855666] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:17.059 [2024-12-05 03:04:47.855676] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:17.059 [2024-12-05 03:04:47.855684] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:17.059 [2024-12-05 03:04:47.855691] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:17.059 [2024-12-05 03:04:47.855697] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:17.059 [2024-12-05 03:04:47.855704] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:17.059 [2024-12-05 03:04:47.855710] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:17.059 [2024-12-05 03:04:47.855718] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:17.059 [2024-12-05 03:04:47.855723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.059 [2024-12-05 03:04:47.855730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:17.059 [2024-12-05 03:04:47.855736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:20:17.059 [2024-12-05 03:04:47.855742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.059 [2024-12-05 03:04:47.855809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.059 [2024-12-05 03:04:47.855816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:17.059 [2024-12-05 03:04:47.855822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:17.059 [2024-12-05 03:04:47.855828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.059 [2024-12-05 03:04:47.855904] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:17.059 [2024-12-05 03:04:47.855913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:17.059 [2024-12-05 03:04:47.855919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:17.059 [2024-12-05 03:04:47.855926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.059 [2024-12-05 03:04:47.855932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:17.059 [2024-12-05 03:04:47.855939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:17.059 [2024-12-05 03:04:47.855944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:17.059 [2024-12-05 03:04:47.855952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:17.059 [2024-12-05 03:04:47.855958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:17.059 [2024-12-05 03:04:47.855964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:17.059 [2024-12-05 03:04:47.855969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:17.059 [2024-12-05 03:04:47.855976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:17.059 [2024-12-05 03:04:47.855981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:17.059 [2024-12-05 03:04:47.855987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:17.059 [2024-12-05 03:04:47.855992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:17.060 [2024-12-05 03:04:47.855999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.060 
[2024-12-05 03:04:47.856004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:17.060 [2024-12-05 03:04:47.856010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:17.060 [2024-12-05 03:04:47.856019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.060 [2024-12-05 03:04:47.856026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:17.060 [2024-12-05 03:04:47.856031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:17.060 [2024-12-05 03:04:47.856037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.060 [2024-12-05 03:04:47.856042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:17.060 [2024-12-05 03:04:47.856050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:17.060 [2024-12-05 03:04:47.856055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.060 [2024-12-05 03:04:47.856061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:17.060 [2024-12-05 03:04:47.856066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:17.060 [2024-12-05 03:04:47.856087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.060 [2024-12-05 03:04:47.856093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:17.060 [2024-12-05 03:04:47.856101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:17.060 [2024-12-05 03:04:47.856106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.060 [2024-12-05 03:04:47.856113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:17.060 [2024-12-05 03:04:47.856118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:17.060 [2024-12-05 03:04:47.856124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:17.060 [2024-12-05 03:04:47.856130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:17.060 [2024-12-05 03:04:47.856140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:17.060 [2024-12-05 03:04:47.856148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:17.060 [2024-12-05 03:04:47.856157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:17.060 [2024-12-05 03:04:47.856166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:17.060 [2024-12-05 03:04:47.856179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.060 [2024-12-05 03:04:47.856188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:17.060 [2024-12-05 03:04:47.856199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:17.060 [2024-12-05 03:04:47.856207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.060 [2024-12-05 03:04:47.856216] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:17.060 [2024-12-05 03:04:47.856227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:17.060 [2024-12-05 03:04:47.856238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:17.060 [2024-12-05 03:04:47.856246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.060 [2024-12-05 03:04:47.856266] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:17.060 [2024-12-05 03:04:47.856276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:17.060 [2024-12-05 03:04:47.856287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:17.060 [2024-12-05 03:04:47.856298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:17.060 [2024-12-05 03:04:47.856307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:17.060 [2024-12-05 03:04:47.856316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:17.060 [2024-12-05 03:04:47.856327] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:17.060 [2024-12-05 03:04:47.856338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:17.060 [2024-12-05 03:04:47.856352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:17.060 [2024-12-05 03:04:47.856362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:17.060 [2024-12-05 03:04:47.856372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:17.060 [2024-12-05 03:04:47.856381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:17.060 [2024-12-05 03:04:47.856392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:17.060 [2024-12-05 03:04:47.856403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:17.060 [2024-12-05 03:04:47.856414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:17.060 [2024-12-05 03:04:47.856428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:17.060 [2024-12-05 03:04:47.856439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:17.060 [2024-12-05 03:04:47.856448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:17.060 [2024-12-05 03:04:47.856460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:17.060 [2024-12-05 03:04:47.856473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:17.060 [2024-12-05 03:04:47.856485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:17.060 [2024-12-05 03:04:47.856494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:17.060 [2024-12-05 03:04:47.856504] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:17.060 [2024-12-05 
03:04:47.856514] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:17.060 [2024-12-05 03:04:47.856528] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:17.060 [2024-12-05 03:04:47.856537] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:17.060 [2024-12-05 03:04:47.856548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:17.060 [2024-12-05 03:04:47.856558] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:17.060 [2024-12-05 03:04:47.856570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.060 [2024-12-05 03:04:47.856583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:17.060 [2024-12-05 03:04:47.856595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.716 ms 00:20:17.060 [2024-12-05 03:04:47.856602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.060 [2024-12-05 03:04:47.877412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.060 [2024-12-05 03:04:47.877439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:17.060 [2024-12-05 03:04:47.877448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.751 ms 00:20:17.060 [2024-12-05 03:04:47.877456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.060 [2024-12-05 03:04:47.877544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.060 [2024-12-05 03:04:47.877552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:17.060 [2024-12-05 03:04:47.877559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:17.060 [2024-12-05 03:04:47.877565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.322 [2024-12-05 03:04:47.901209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.322 [2024-12-05 03:04:47.901237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:17.322 [2024-12-05 03:04:47.901246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.626 ms 00:20:17.322 [2024-12-05 03:04:47.901252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.322 [2024-12-05 03:04:47.901296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.322 [2024-12-05 03:04:47.901304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:17.322 [2024-12-05 03:04:47.901311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:17.322 [2024-12-05 03:04:47.901317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.322 [2024-12-05 03:04:47.901600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.322 [2024-12-05 03:04:47.901610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:17.323 [2024-12-05 03:04:47.901620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:20:17.323 [2024-12-05 03:04:47.901625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:17.323 [2024-12-05 03:04:47.901723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.323 [2024-12-05 03:04:47.901730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:17.323 [2024-12-05 03:04:47.901737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:20:17.323 [2024-12-05 03:04:47.901743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.323 [2024-12-05 03:04:47.913209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.323 [2024-12-05 03:04:47.913314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:17.323 [2024-12-05 03:04:47.913328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.449 ms 00:20:17.323 [2024-12-05 03:04:47.913334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.323 [2024-12-05 03:04:47.943800] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:17.323 [2024-12-05 03:04:47.943935] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:17.323 [2024-12-05 03:04:47.943955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.323 [2024-12-05 03:04:47.943965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:17.323 [2024-12-05 03:04:47.943976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.545 ms 00:20:17.323 [2024-12-05 03:04:47.943988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.323 [2024-12-05 03:04:47.963519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.323 [2024-12-05 03:04:47.963547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:17.323 [2024-12-05 03:04:47.963558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.392 ms 00:20:17.323 [2024-12-05 03:04:47.963565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.323 [2024-12-05 03:04:47.972463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.323 [2024-12-05 03:04:47.972487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:17.323 [2024-12-05 03:04:47.972497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.840 ms 00:20:17.323 [2024-12-05 03:04:47.972503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.323 [2024-12-05 03:04:47.981003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.323 [2024-12-05 03:04:47.981026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:17.323 [2024-12-05 03:04:47.981035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.459 ms 00:20:17.323 [2024-12-05 03:04:47.981040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.323 [2024-12-05 03:04:47.981514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.323 [2024-12-05 03:04:47.981534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:17.323 [2024-12-05 03:04:47.981542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:20:17.323 [2024-12-05 03:04:47.981548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.323 [2024-12-05 
03:04:48.025398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.323 [2024-12-05 03:04:48.025430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:17.323 [2024-12-05 03:04:48.025442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.831 ms 00:20:17.323 [2024-12-05 03:04:48.025448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.323 [2024-12-05 03:04:48.033276] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:17.323 [2024-12-05 03:04:48.044712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.323 [2024-12-05 03:04:48.044743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:17.323 [2024-12-05 03:04:48.044755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.206 ms 00:20:17.323 [2024-12-05 03:04:48.044762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.323 [2024-12-05 03:04:48.044830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.323 [2024-12-05 03:04:48.044840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:17.323 [2024-12-05 03:04:48.044846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:17.323 [2024-12-05 03:04:48.044853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.323 [2024-12-05 03:04:48.044890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.323 [2024-12-05 03:04:48.044898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:17.323 [2024-12-05 03:04:48.044904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:17.323 [2024-12-05 03:04:48.044912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.323 [2024-12-05 03:04:48.044930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.323 [2024-12-05 03:04:48.044938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:17.323 [2024-12-05 03:04:48.044944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:17.323 [2024-12-05 03:04:48.044952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.323 [2024-12-05 03:04:48.044975] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:17.323 [2024-12-05 03:04:48.044985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.323 [2024-12-05 03:04:48.044993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:17.323 [2024-12-05 03:04:48.045000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:17.323 [2024-12-05 03:04:48.045006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.323 [2024-12-05 03:04:48.062695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.323 [2024-12-05 03:04:48.062795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:17.323 [2024-12-05 03:04:48.062812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.670 ms 00:20:17.323 [2024-12-05 03:04:48.062819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.323 [2024-12-05 03:04:48.062926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.323 [2024-12-05 03:04:48.062939] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:17.323 [2024-12-05 03:04:48.062947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:17.323 [2024-12-05 03:04:48.062955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.323 [2024-12-05 03:04:48.063611] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:17.323 [2024-12-05 03:04:48.065825] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 228.790 ms, result 0 00:20:17.323 [2024-12-05 03:04:48.066829] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:17.323 Some configs were skipped because the RPC state that can call them passed over. 00:20:17.323 03:04:48 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:17.585 [2024-12-05 03:04:48.247531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.585 [2024-12-05 03:04:48.247690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:17.585 [2024-12-05 03:04:48.247752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.761 ms 00:20:17.585 [2024-12-05 03:04:48.247779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.585 [2024-12-05 03:04:48.247832] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.064 ms, result 0 00:20:17.585 true 00:20:17.585 03:04:48 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:17.585 [2024-12-05 03:04:48.407214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.585 [2024-12-05 03:04:48.407323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:17.585 [2024-12-05 03:04:48.407376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.287 ms 00:20:17.585 [2024-12-05 03:04:48.407400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.585 [2024-12-05 03:04:48.407449] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.519 ms, result 0 00:20:17.585 true 00:20:17.585 03:04:48 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76668 00:20:17.585 03:04:48 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76668 ']' 00:20:17.585 03:04:48 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76668 00:20:17.585 03:04:48 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:17.585 03:04:48 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:17.847 03:04:48 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76668 00:20:17.847 03:04:48 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:17.847 killing process with pid 76668 00:20:17.847 03:04:48 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:17.847 03:04:48 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76668' 00:20:17.847 03:04:48 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76668 00:20:17.847 03:04:48 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76668 00:20:18.419 [2024-12-05 03:04:49.237606] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.419 [2024-12-05 03:04:49.237694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:18.420 [2024-12-05 03:04:49.237711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:18.420 [2024-12-05 03:04:49.237722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.420 [2024-12-05 03:04:49.237753] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:18.420 [2024-12-05 03:04:49.241191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.420 [2024-12-05 03:04:49.241238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:18.420 [2024-12-05 03:04:49.241256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.415 ms 00:20:18.420 [2024-12-05 03:04:49.241265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.420 [2024-12-05 03:04:49.241630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.420 [2024-12-05 03:04:49.241643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:18.420 [2024-12-05 03:04:49.241658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:20:18.420 [2024-12-05 03:04:49.241667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.420 [2024-12-05 03:04:49.246402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.420 [2024-12-05 03:04:49.246446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:18.420 [2024-12-05 03:04:49.246464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.709 ms 00:20:18.420 [2024-12-05 03:04:49.246472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.420 [2024-12-05 03:04:49.253523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.420 [2024-12-05 03:04:49.253566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:18.420 [2024-12-05 03:04:49.253586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.996 ms 00:20:18.420 [2024-12-05 03:04:49.253594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.682 [2024-12-05 03:04:49.264967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.682 [2024-12-05 03:04:49.265271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:18.682 [2024-12-05 03:04:49.265301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.296 ms 00:20:18.682 [2024-12-05 03:04:49.265310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.682 [2024-12-05 03:04:49.275282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.682 [2024-12-05 03:04:49.275332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:18.682 [2024-12-05 03:04:49.275347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.915 ms 00:20:18.682 [2024-12-05 03:04:49.275356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.682 [2024-12-05 03:04:49.275514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.682 [2024-12-05 03:04:49.275526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:18.682 [2024-12-05 03:04:49.275538] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:20:18.682 [2024-12-05 03:04:49.275547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.682 [2024-12-05 03:04:49.287300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.682 [2024-12-05 03:04:49.287342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:18.682 [2024-12-05 03:04:49.287356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.725 ms 00:20:18.682 [2024-12-05 03:04:49.287364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.682 [2024-12-05 03:04:49.298395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.682 [2024-12-05 03:04:49.298436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:18.682 [2024-12-05 03:04:49.298457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.976 ms 00:20:18.682 [2024-12-05 03:04:49.298465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.682 [2024-12-05 03:04:49.308943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.682 [2024-12-05 03:04:49.309150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:18.682 [2024-12-05 03:04:49.309175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.425 ms 00:20:18.682 [2024-12-05 03:04:49.309184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.682 [2024-12-05 03:04:49.320115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.682 [2024-12-05 03:04:49.320307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:18.682 [2024-12-05 03:04:49.320332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.737 ms 00:20:18.682 [2024-12-05 03:04:49.320340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.682 [2024-12-05 03:04:49.320521] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:18.682 [2024-12-05 03:04:49.320554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:18.682 [2024-12-05 03:04:49.320570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:18.682 [2024-12-05 03:04:49.320578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:18.682 [2024-12-05 03:04:49.320590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:18.682 [2024-12-05 03:04:49.320598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:18.682 [2024-12-05 03:04:49.320613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:18.682 [2024-12-05 03:04:49.320621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:18.682 [2024-12-05 03:04:49.320632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:18.682 [2024-12-05 03:04:49.320641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:18.682 [2024-12-05 03:04:49.320651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:18.682 [2024-12-05 
03:04:49.320658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:18.682 [2024-12-05 03:04:49.320668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:18.682 [2024-12-05 03:04:49.320676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:18.682 [2024-12-05 03:04:49.320688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:18.682 [2024-12-05 03:04:49.320697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:18.682 [2024-12-05 03:04:49.320706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:18.682 [2024-12-05 03:04:49.320714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:18.682 [2024-12-05 03:04:49.320723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:18.682 [2024-12-05 03:04:49.320731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.320741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.320748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.320761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.320768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.320777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.320786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.320797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.320804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.320815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.320823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.320833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.320843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.320854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.320862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.320873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.320882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:20:18.683 [2024-12-05 03:04:49.320893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.320901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.320913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.320920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.320934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.320943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.320953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.320961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.320970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.320978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.320988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.320995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:18.683 [2024-12-05 03:04:49.321560] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:18.683 [2024-12-05 03:04:49.321576] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 39a924c5-8443-42fa-9f63-e8e457595e05 00:20:18.683 [2024-12-05 03:04:49.321590] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:18.683 [2024-12-05 03:04:49.321600] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:18.683 [2024-12-05 03:04:49.321608] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:18.683 [2024-12-05 03:04:49.321619] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:18.683 [2024-12-05 03:04:49.321626] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:18.683 [2024-12-05 03:04:49.321637] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:18.683 [2024-12-05 03:04:49.321644] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:18.683 [2024-12-05 03:04:49.321655] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:18.684 [2024-12-05 03:04:49.321662] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:18.684 [2024-12-05 03:04:49.321671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:18.684 [2024-12-05 03:04:49.321680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:18.684 [2024-12-05 03:04:49.321691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.158 ms 00:20:18.684 [2024-12-05 03:04:49.321699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.684 [2024-12-05 03:04:49.336328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.684 [2024-12-05 03:04:49.336375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:18.684 [2024-12-05 03:04:49.336393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.582 ms 00:20:18.684 [2024-12-05 03:04:49.336402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.684 [2024-12-05 03:04:49.336874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.684 [2024-12-05 03:04:49.336897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:18.684 [2024-12-05 03:04:49.336914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:20:18.684 [2024-12-05 03:04:49.336923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.684 [2024-12-05 03:04:49.389714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.684 [2024-12-05 03:04:49.389761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:18.684 [2024-12-05 03:04:49.389776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.684 [2024-12-05 03:04:49.389786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.684 [2024-12-05 03:04:49.389887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.684 [2024-12-05 03:04:49.389898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:18.684 [2024-12-05 03:04:49.389915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.684 [2024-12-05 03:04:49.389924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.684 [2024-12-05 03:04:49.389981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.684 [2024-12-05 03:04:49.389993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:18.684 [2024-12-05 03:04:49.390007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.684 [2024-12-05 03:04:49.390015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.684 [2024-12-05 03:04:49.390038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.684 [2024-12-05 03:04:49.390046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:18.684 [2024-12-05 03:04:49.390057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.684 [2024-12-05 03:04:49.390091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.684 [2024-12-05 03:04:49.482714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.684 [2024-12-05 03:04:49.483018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:18.684 [2024-12-05 03:04:49.483046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.684 [2024-12-05 03:04:49.483056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.946 [2024-12-05 
03:04:49.558482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.946 [2024-12-05 03:04:49.558538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:18.946 [2024-12-05 03:04:49.558554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.946 [2024-12-05 03:04:49.558567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.946 [2024-12-05 03:04:49.558655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.946 [2024-12-05 03:04:49.558667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:18.946 [2024-12-05 03:04:49.558682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.946 [2024-12-05 03:04:49.558691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.946 [2024-12-05 03:04:49.558730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.946 [2024-12-05 03:04:49.558741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:18.946 [2024-12-05 03:04:49.558753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.946 [2024-12-05 03:04:49.558761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.946 [2024-12-05 03:04:49.558874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.946 [2024-12-05 03:04:49.558887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:18.946 [2024-12-05 03:04:49.558898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.946 [2024-12-05 03:04:49.558908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.946 [2024-12-05 03:04:49.558952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.946 [2024-12-05 03:04:49.558963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:18.946 [2024-12-05 03:04:49.558976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.946 [2024-12-05 03:04:49.558984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.946 [2024-12-05 03:04:49.559043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.946 [2024-12-05 03:04:49.559054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:18.946 [2024-12-05 03:04:49.559104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.946 [2024-12-05 03:04:49.559114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.946 [2024-12-05 03:04:49.559183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.946 [2024-12-05 03:04:49.559197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:18.946 [2024-12-05 03:04:49.559209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.946 [2024-12-05 03:04:49.559219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.946 [2024-12-05 03:04:49.559412] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 321.773 ms, result 0 00:20:19.520 03:04:50 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:19.520 03:04:50 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:19.782 [2024-12-05 03:04:50.363210] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:20:19.782 [2024-12-05 03:04:50.363318] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76715 ] 00:20:19.782 [2024-12-05 03:04:50.513771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.782 [2024-12-05 03:04:50.604570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.044 [2024-12-05 03:04:50.816806] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:20.044 [2024-12-05 03:04:50.816859] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:20.308 [2024-12-05 03:04:50.968964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.308 [2024-12-05 03:04:50.969001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:20.308 [2024-12-05 03:04:50.969012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:20.308 [2024-12-05 03:04:50.969018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.308 [2024-12-05 03:04:50.971121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.308 [2024-12-05 03:04:50.971272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:20.308 [2024-12-05 03:04:50.971286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.090 ms 00:20:20.308 [2024-12-05 03:04:50.971292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.308 [2024-12-05 03:04:50.971348] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:20.308 [2024-12-05 03:04:50.971895] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:20.308 [2024-12-05 03:04:50.971913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.308 [2024-12-05 03:04:50.971919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:20.308 [2024-12-05 03:04:50.971926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:20:20.308 [2024-12-05 03:04:50.971931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.308 [2024-12-05 03:04:50.972933] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:20.308 [2024-12-05 03:04:50.982618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.308 [2024-12-05 03:04:50.982733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:20.308 [2024-12-05 03:04:50.982747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.685 ms 00:20:20.308 [2024-12-05 03:04:50.982753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.308 [2024-12-05 03:04:50.982822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.308 [2024-12-05 03:04:50.982830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:20.308 [2024-12-05 03:04:50.982837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.016 ms 00:20:20.308 [2024-12-05 03:04:50.982842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.308 [2024-12-05 03:04:50.987302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.308 [2024-12-05 03:04:50.987327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:20.308 [2024-12-05 03:04:50.987335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.431 ms 00:20:20.308 [2024-12-05 03:04:50.987340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.308 [2024-12-05 03:04:50.987411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.308 [2024-12-05 03:04:50.987420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:20.308 [2024-12-05 03:04:50.987426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:20.308 [2024-12-05 03:04:50.987432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.308 [2024-12-05 03:04:50.987449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.308 [2024-12-05 03:04:50.987455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:20.308 [2024-12-05 03:04:50.987461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:20.308 [2024-12-05 03:04:50.987466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.308 [2024-12-05 03:04:50.987484] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:20.308 [2024-12-05 03:04:50.990211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.308 [2024-12-05 03:04:50.990233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:20.308 [2024-12-05 03:04:50.990240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.731 ms 00:20:20.308 [2024-12-05 03:04:50.990246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.308 [2024-12-05 03:04:50.990275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.308 [2024-12-05 03:04:50.990282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:20.308 [2024-12-05 03:04:50.990288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:20.308 [2024-12-05 03:04:50.990293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.308 [2024-12-05 03:04:50.990309] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:20.308 [2024-12-05 03:04:50.990327] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:20.308 [2024-12-05 03:04:50.990353] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:20.308 [2024-12-05 03:04:50.990365] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:20.308 [2024-12-05 03:04:50.990444] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:20.308 [2024-12-05 03:04:50.990452] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:20.308 [2024-12-05 03:04:50.990460] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:20.308 [2024-12-05 03:04:50.990469] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:20.308 [2024-12-05 03:04:50.990476] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:20.308 [2024-12-05 03:04:50.990482] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:20.308 [2024-12-05 03:04:50.990487] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:20.308 [2024-12-05 03:04:50.990493] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:20.308 [2024-12-05 03:04:50.990498] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:20.308 [2024-12-05 03:04:50.990503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.308 [2024-12-05 03:04:50.990509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:20.308 [2024-12-05 03:04:50.990515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:20:20.308 [2024-12-05 03:04:50.990520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.308 [2024-12-05 03:04:50.990585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.308 [2024-12-05 03:04:50.990594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:20.308 [2024-12-05 03:04:50.990600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:20.308 [2024-12-05 03:04:50.990605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.308 [2024-12-05 03:04:50.990684] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:20.308 [2024-12-05 03:04:50.990692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:20.308 [2024-12-05 03:04:50.990697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:20.308 [2024-12-05 03:04:50.990703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.308 [2024-12-05 03:04:50.990713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:20.308 [2024-12-05 03:04:50.990718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:20.308 [2024-12-05 03:04:50.990723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:20.308 [2024-12-05 03:04:50.990729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:20.308 [2024-12-05 03:04:50.990734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:20.308 [2024-12-05 03:04:50.990739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:20.308 [2024-12-05 03:04:50.990744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:20.308 [2024-12-05 03:04:50.990753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:20.308 [2024-12-05 03:04:50.990758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:20.308 [2024-12-05 03:04:50.990764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:20.308 [2024-12-05 03:04:50.990769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:20.308 [2024-12-05 03:04:50.990774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.308 [2024-12-05 03:04:50.990779] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:20.308 [2024-12-05 03:04:50.990786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:20.308 [2024-12-05 03:04:50.990791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.308 [2024-12-05 03:04:50.990796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:20.308 [2024-12-05 03:04:50.990801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:20.308 [2024-12-05 03:04:50.990806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:20.308 [2024-12-05 03:04:50.990811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:20.308 [2024-12-05 03:04:50.990816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:20.308 [2024-12-05 03:04:50.990821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:20.308 [2024-12-05 03:04:50.990826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:20.308 [2024-12-05 03:04:50.990831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:20.308 [2024-12-05 03:04:50.990835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:20.308 [2024-12-05 03:04:50.990840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:20.308 [2024-12-05 03:04:50.990846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:20.308 [2024-12-05 03:04:50.990851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:20.308 [2024-12-05 03:04:50.990856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:20.308 [2024-12-05 03:04:50.990861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:20.308 [2024-12-05 03:04:50.990866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:20.308 [2024-12-05 03:04:50.990871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:20.309 [2024-12-05 03:04:50.990876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:20.309 [2024-12-05 03:04:50.990881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:20.309 [2024-12-05 03:04:50.990886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:20.309 [2024-12-05 03:04:50.990891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:20.309 [2024-12-05 03:04:50.990896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.309 [2024-12-05 03:04:50.990901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:20.309 [2024-12-05 03:04:50.990906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:20.309 [2024-12-05 03:04:50.990911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.309 [2024-12-05 03:04:50.990916] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:20.309 [2024-12-05 03:04:50.990922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:20.309 [2024-12-05 03:04:50.990929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:20.309 [2024-12-05 03:04:50.990934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.309 [2024-12-05 03:04:50.990940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:20.309 
[2024-12-05 03:04:50.990945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:20.309 [2024-12-05 03:04:50.990950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:20.309 [2024-12-05 03:04:50.990956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:20.309 [2024-12-05 03:04:50.990961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:20.309 [2024-12-05 03:04:50.990966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:20.309 [2024-12-05 03:04:50.990972] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:20.309 [2024-12-05 03:04:50.990978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:20.309 [2024-12-05 03:04:50.990985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:20.309 [2024-12-05 03:04:50.990990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:20.309 [2024-12-05 03:04:50.990996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:20.309 [2024-12-05 03:04:50.991001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:20.309 [2024-12-05 03:04:50.991007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:20.309 [2024-12-05 03:04:50.991012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:20.309 [2024-12-05 03:04:50.991017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:20.309 [2024-12-05 03:04:50.991022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:20.309 [2024-12-05 03:04:50.991027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:20.309 [2024-12-05 03:04:50.991033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:20.309 [2024-12-05 03:04:50.991038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:20.309 [2024-12-05 03:04:50.991043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:20.309 [2024-12-05 03:04:50.991049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:20.309 [2024-12-05 03:04:50.991054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:20.309 [2024-12-05 03:04:50.991059] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:20.309 [2024-12-05 03:04:50.991065] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:20.309 [2024-12-05 03:04:50.991090] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:20.309 [2024-12-05 03:04:50.991095] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:20.309 [2024-12-05 03:04:50.991101] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:20.309 [2024-12-05 03:04:50.991107] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:20.309 [2024-12-05 03:04:50.991113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.309 [2024-12-05 03:04:50.991121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:20.309 [2024-12-05 03:04:50.991127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.486 ms 00:20:20.309 [2024-12-05 03:04:50.991132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.309 [2024-12-05 03:04:51.012202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.309 [2024-12-05 03:04:51.012229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:20.309 [2024-12-05 03:04:51.012237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.027 ms 00:20:20.309 [2024-12-05 03:04:51.012243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.309 [2024-12-05 03:04:51.012344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.309 [2024-12-05 03:04:51.012351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:20.309 [2024-12-05 03:04:51.012358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:20.309 [2024-12-05 03:04:51.012364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.309 [2024-12-05 03:04:51.050813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.309 [2024-12-05 03:04:51.050936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:20.309 [2024-12-05 03:04:51.050954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.432 ms 00:20:20.309 [2024-12-05 03:04:51.050961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.309 [2024-12-05 03:04:51.051025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.309 [2024-12-05 03:04:51.051034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:20.309 [2024-12-05 03:04:51.051041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:20.309 [2024-12-05 03:04:51.051047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.309 [2024-12-05 03:04:51.051371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.309 [2024-12-05 03:04:51.051383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:20.309 [2024-12-05 03:04:51.051391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:20:20.309 [2024-12-05 03:04:51.051403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.309 [2024-12-05 
03:04:51.051506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.309 [2024-12-05 03:04:51.051513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:20.309 [2024-12-05 03:04:51.051519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:20:20.309 [2024-12-05 03:04:51.051524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.309 [2024-12-05 03:04:51.062459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.309 [2024-12-05 03:04:51.062485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:20.309 [2024-12-05 03:04:51.062493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.918 ms 00:20:20.309 [2024-12-05 03:04:51.062498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.309 [2024-12-05 03:04:51.072248] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:20.309 [2024-12-05 03:04:51.072283] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:20.309 [2024-12-05 03:04:51.072293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.309 [2024-12-05 03:04:51.072299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:20.309 [2024-12-05 03:04:51.072305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.709 ms 00:20:20.309 [2024-12-05 03:04:51.072311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.309 [2024-12-05 03:04:51.091021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.309 [2024-12-05 03:04:51.091052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:20.309 [2024-12-05 03:04:51.091061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.661 ms 00:20:20.309 [2024-12-05 03:04:51.091067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.309 [2024-12-05 03:04:51.100077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.309 [2024-12-05 03:04:51.100102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:20.309 [2024-12-05 03:04:51.100110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.945 ms 00:20:20.309 [2024-12-05 03:04:51.100115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.309 [2024-12-05 03:04:51.108826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.309 [2024-12-05 03:04:51.108859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:20.309 [2024-12-05 03:04:51.108867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.667 ms 00:20:20.309 [2024-12-05 03:04:51.108872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.309 [2024-12-05 03:04:51.109346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.309 [2024-12-05 03:04:51.109362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:20.309 [2024-12-05 03:04:51.109369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:20:20.309 [2024-12-05 03:04:51.109375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.571 [2024-12-05 03:04:51.153571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:20.571 [2024-12-05 03:04:51.153714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:20.571 [2024-12-05 03:04:51.153728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.178 ms 00:20:20.572 [2024-12-05 03:04:51.153736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.572 [2024-12-05 03:04:51.161685] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:20.572 [2024-12-05 03:04:51.173369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.572 [2024-12-05 03:04:51.173479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:20.572 [2024-12-05 03:04:51.173519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.575 ms 00:20:20.572 [2024-12-05 03:04:51.173540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.572 [2024-12-05 03:04:51.173619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.572 [2024-12-05 03:04:51.173640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:20.572 [2024-12-05 03:04:51.173656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:20.572 [2024-12-05 03:04:51.173671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.572 [2024-12-05 03:04:51.173716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.572 [2024-12-05 03:04:51.173733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:20.572 [2024-12-05 03:04:51.173749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:20.572 [2024-12-05 03:04:51.173821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.572 [2024-12-05 03:04:51.173861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.572 [2024-12-05 03:04:51.173878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:20.572 [2024-12-05 03:04:51.173893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:20.572 [2024-12-05 03:04:51.173907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.572 [2024-12-05 03:04:51.173940] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:20.572 [2024-12-05 03:04:51.173957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.572 [2024-12-05 03:04:51.173972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:20.572 [2024-12-05 03:04:51.173987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:20.572 [2024-12-05 03:04:51.174036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.572 [2024-12-05 03:04:51.192163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.572 [2024-12-05 03:04:51.192263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:20.572 [2024-12-05 03:04:51.192306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.096 ms 00:20:20.572 [2024-12-05 03:04:51.192323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.572 [2024-12-05 03:04:51.192395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.572 [2024-12-05 03:04:51.192416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:20:20.572 [2024-12-05 03:04:51.192431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:20.572 [2024-12-05 03:04:51.192446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.572 [2024-12-05 03:04:51.193364] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:20.572 [2024-12-05 03:04:51.195720] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 224.178 ms, result 0 00:20:20.572 [2024-12-05 03:04:51.196636] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:20.572 [2024-12-05 03:04:51.207362] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:21.517  [2024-12-05T03:04:53.306Z] Copying: 26/256 [MB] (26 MBps) [2024-12-05T03:04:54.253Z] Copying: 42/256 [MB] (15 MBps) [2024-12-05T03:04:55.638Z] Copying: 64/256 [MB] (22 MBps) [2024-12-05T03:04:56.579Z] Copying: 87/256 [MB] (23 MBps) [2024-12-05T03:04:57.523Z] Copying: 105/256 [MB] (18 MBps) [2024-12-05T03:04:58.465Z] Copying: 124/256 [MB] (19 MBps) [2024-12-05T03:04:59.408Z] Copying: 144/256 [MB] (19 MBps) [2024-12-05T03:05:00.353Z] Copying: 157/256 [MB] (12 MBps) [2024-12-05T03:05:01.297Z] Copying: 175/256 [MB] (18 MBps) [2024-12-05T03:05:02.239Z] Copying: 189/256 [MB] (14 MBps) [2024-12-05T03:05:03.626Z] Copying: 201/256 [MB] (12 MBps) [2024-12-05T03:05:04.571Z] Copying: 213/256 [MB] (11 MBps) [2024-12-05T03:05:05.515Z] Copying: 223/256 [MB] (10 MBps) [2024-12-05T03:05:06.457Z] Copying: 237/256 [MB] (14 MBps) [2024-12-05T03:05:07.029Z] Copying: 249/256 [MB] (12 MBps) [2024-12-05T03:05:07.029Z] Copying: 256/256 [MB] (average 16 MBps)[2024-12-05 03:05:06.778933] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:36.185 [2024-12-05 03:05:06.789366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.185 [2024-12-05 03:05:06.789415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:36.185 [2024-12-05 03:05:06.789440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:36.185 [2024-12-05 03:05:06.789449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.185 [2024-12-05 03:05:06.789473] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:36.185 [2024-12-05 03:05:06.792418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.185 [2024-12-05 03:05:06.792600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:36.185 [2024-12-05 03:05:06.792621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.930 ms 00:20:36.185 [2024-12-05 03:05:06.792630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.185 [2024-12-05 03:05:06.792901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.185 [2024-12-05 03:05:06.792912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:36.185 [2024-12-05 03:05:06.792921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:20:36.185 [2024-12-05 03:05:06.792929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.185 [2024-12-05 03:05:06.796654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:36.185 [2024-12-05 03:05:06.796677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:36.185 [2024-12-05 03:05:06.796687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.706 ms 00:20:36.185 [2024-12-05 03:05:06.796696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.185 [2024-12-05 03:05:06.803668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.185 [2024-12-05 03:05:06.803830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:36.185 [2024-12-05 03:05:06.803850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.953 ms 00:20:36.185 [2024-12-05 03:05:06.803858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.185 [2024-12-05 03:05:06.829571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.185 [2024-12-05 03:05:06.829620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:36.185 [2024-12-05 03:05:06.829634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.644 ms 00:20:36.185 [2024-12-05 03:05:06.829642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.185 [2024-12-05 03:05:06.845941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.185 [2024-12-05 03:05:06.846158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:36.185 [2024-12-05 03:05:06.846187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.237 ms 00:20:36.185 [2024-12-05 03:05:06.846195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.185 [2024-12-05 03:05:06.846346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.185 [2024-12-05 03:05:06.846357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:36.185 [2024-12-05 03:05:06.846375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:20:36.185 [2024-12-05 03:05:06.846383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.185 [2024-12-05 03:05:06.872871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.185 [2024-12-05 03:05:06.872914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:36.185 [2024-12-05 03:05:06.872925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.470 ms 00:20:36.185 [2024-12-05 03:05:06.872932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.185 [2024-12-05 03:05:06.898166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.185 [2024-12-05 03:05:06.898209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:36.185 [2024-12-05 03:05:06.898220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.173 ms 00:20:36.185 [2024-12-05 03:05:06.898228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.185 [2024-12-05 03:05:06.923363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.185 [2024-12-05 03:05:06.923409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:36.185 [2024-12-05 03:05:06.923420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.075 ms 00:20:36.185 [2024-12-05 03:05:06.923428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.185 
[2024-12-05 03:05:06.947989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.186 [2024-12-05 03:05:06.948034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:36.186 [2024-12-05 03:05:06.948045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.472 ms 00:20:36.186 [2024-12-05 03:05:06.948052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.186 [2024-12-05 03:05:06.948238] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:36.186 [2024-12-05 03:05:06.948276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 
00:20:36.186 [2024-12-05 03:05:06.948440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 
wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.948998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.949006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.949013] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.949020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.949027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.949035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.949043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:36.186 [2024-12-05 03:05:06.949059] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:36.186 [2024-12-05 03:05:06.949067] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 39a924c5-8443-42fa-9f63-e8e457595e05 00:20:36.186 [2024-12-05 03:05:06.949089] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:36.186 [2024-12-05 03:05:06.949096] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:36.186 [2024-12-05 03:05:06.949104] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:36.186 [2024-12-05 03:05:06.949112] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:36.186 [2024-12-05 03:05:06.949120] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:36.186 [2024-12-05 03:05:06.949128] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:36.186 [2024-12-05 03:05:06.949139] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:36.186 [2024-12-05 03:05:06.949147] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:36.186 [2024-12-05 03:05:06.949153] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:36.186 [2024-12-05 03:05:06.949161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.186 [2024-12-05 03:05:06.949169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:36.186 [2024-12-05 03:05:06.949179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.924 ms 00:20:36.186 [2024-12-05 03:05:06.949186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.186 [2024-12-05 03:05:06.962631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.186 [2024-12-05 03:05:06.962670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:36.186 [2024-12-05 03:05:06.962682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.411 ms 00:20:36.186 [2024-12-05 03:05:06.962690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.186 [2024-12-05 03:05:06.963112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.186 [2024-12-05 03:05:06.963123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:36.186 [2024-12-05 03:05:06.963132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.380 ms 00:20:36.186 [2024-12-05 03:05:06.963140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.186 [2024-12-05 03:05:07.001671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.186 [2024-12-05 03:05:07.001719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:36.186 [2024-12-05 03:05:07.001731] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.186 [2024-12-05 03:05:07.001744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.186 [2024-12-05 03:05:07.001845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.186 [2024-12-05 03:05:07.001856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:36.186 [2024-12-05 03:05:07.001865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.186 [2024-12-05 03:05:07.001873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.186 [2024-12-05 03:05:07.001924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.186 [2024-12-05 03:05:07.001934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:36.186 [2024-12-05 03:05:07.001942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.186 [2024-12-05 03:05:07.001950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.186 [2024-12-05 03:05:07.001971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.186 [2024-12-05 03:05:07.001979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:36.186 [2024-12-05 03:05:07.001987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.186 [2024-12-05 03:05:07.001995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.447 [2024-12-05 03:05:07.085035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.447 [2024-12-05 03:05:07.085118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:36.447 [2024-12-05 03:05:07.085132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.447 [2024-12-05 03:05:07.085141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.447 [2024-12-05 03:05:07.153496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.447 [2024-12-05 03:05:07.153553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:36.447 [2024-12-05 03:05:07.153566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.447 [2024-12-05 03:05:07.153575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.447 [2024-12-05 03:05:07.153650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.447 [2024-12-05 03:05:07.153660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:36.447 [2024-12-05 03:05:07.153669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.447 [2024-12-05 03:05:07.153678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.447 [2024-12-05 03:05:07.153711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.447 [2024-12-05 03:05:07.153727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:36.447 [2024-12-05 03:05:07.153736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.447 [2024-12-05 03:05:07.153744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.447 [2024-12-05 03:05:07.153847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.447 [2024-12-05 03:05:07.153858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize memory pools 00:20:36.447 [2024-12-05 03:05:07.153867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.447 [2024-12-05 03:05:07.153875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.447 [2024-12-05 03:05:07.153910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.447 [2024-12-05 03:05:07.153920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:36.447 [2024-12-05 03:05:07.153931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.447 [2024-12-05 03:05:07.153940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.447 [2024-12-05 03:05:07.153981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.447 [2024-12-05 03:05:07.153991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:36.447 [2024-12-05 03:05:07.153999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.447 [2024-12-05 03:05:07.154008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.448 [2024-12-05 03:05:07.154057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.448 [2024-12-05 03:05:07.154102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:36.448 [2024-12-05 03:05:07.154113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.448 [2024-12-05 03:05:07.154121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.448 [2024-12-05 03:05:07.154278] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 364.902 ms, result 0 00:20:37.018 00:20:37.018 00:20:37.018 03:05:07 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:37.018 03:05:07 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:37.592 03:05:08 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:37.592 [2024-12-05 03:05:08.409160] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:20:37.592 [2024-12-05 03:05:08.409301] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76908 ] 00:20:37.853 [2024-12-05 03:05:08.571444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.853 [2024-12-05 03:05:08.655772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.114 [2024-12-05 03:05:08.864980] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:38.114 [2024-12-05 03:05:08.865029] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:38.377 [2024-12-05 03:05:09.012963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.377 [2024-12-05 03:05:09.012999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:38.377 [2024-12-05 03:05:09.013009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:38.377 [2024-12-05 03:05:09.013016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.377 [2024-12-05 03:05:09.015084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.377 [2024-12-05 03:05:09.015112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:38.377 [2024-12-05 03:05:09.015119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.056 ms 00:20:38.377 [2024-12-05 03:05:09.015125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.377 [2024-12-05 03:05:09.015180] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:38.377 [2024-12-05 03:05:09.015729] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:38.377 [2024-12-05 03:05:09.015748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.377 [2024-12-05 03:05:09.015754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:38.377 [2024-12-05 03:05:09.015761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:20:38.377 [2024-12-05 03:05:09.015767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.377 [2024-12-05 03:05:09.016807] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:38.377 [2024-12-05 03:05:09.026172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.377 [2024-12-05 03:05:09.026324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:38.377 [2024-12-05 03:05:09.026337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.366 ms 00:20:38.377 [2024-12-05 03:05:09.026344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.377 [2024-12-05 03:05:09.026406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.377 [2024-12-05 03:05:09.026415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:38.378 [2024-12-05 03:05:09.026422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:38.378 [2024-12-05 03:05:09.026428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.378 [2024-12-05 03:05:09.030649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:38.378 [2024-12-05 03:05:09.030673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:38.378 [2024-12-05 03:05:09.030681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.192 ms 00:20:38.378 [2024-12-05 03:05:09.030688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.378 [2024-12-05 03:05:09.030760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.378 [2024-12-05 03:05:09.030768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:38.378 [2024-12-05 03:05:09.030774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:38.378 [2024-12-05 03:05:09.030780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.378 [2024-12-05 03:05:09.030797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.378 [2024-12-05 03:05:09.030803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:38.378 [2024-12-05 03:05:09.030809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:38.378 [2024-12-05 03:05:09.030815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.378 [2024-12-05 03:05:09.030831] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:38.378 [2024-12-05 03:05:09.033565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.378 [2024-12-05 03:05:09.033667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:38.378 [2024-12-05 03:05:09.033678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.737 ms 00:20:38.378 [2024-12-05 03:05:09.033684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.378 [2024-12-05 03:05:09.033715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.378 [2024-12-05 03:05:09.033722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:38.378 [2024-12-05 03:05:09.033728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:38.378 [2024-12-05 03:05:09.033733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.378 [2024-12-05 03:05:09.033748] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:38.378 [2024-12-05 03:05:09.033763] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:38.378 [2024-12-05 03:05:09.033789] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:38.378 [2024-12-05 03:05:09.033801] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:38.378 [2024-12-05 03:05:09.033879] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:38.378 [2024-12-05 03:05:09.033887] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:38.378 [2024-12-05 03:05:09.033895] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:38.378 [2024-12-05 03:05:09.033904] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:38.378 [2024-12-05 03:05:09.033911] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:38.378 [2024-12-05 03:05:09.033917] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:38.378 [2024-12-05 03:05:09.033923] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:38.378 [2024-12-05 03:05:09.033929] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:38.378 [2024-12-05 03:05:09.033934] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:38.378 [2024-12-05 03:05:09.033940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.378 [2024-12-05 03:05:09.033945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:38.378 [2024-12-05 03:05:09.033952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:20:38.378 [2024-12-05 03:05:09.033957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.378 [2024-12-05 03:05:09.034024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.378 [2024-12-05 03:05:09.034032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:38.378 [2024-12-05 03:05:09.034037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:38.378 [2024-12-05 03:05:09.034043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.378 [2024-12-05 03:05:09.034133] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:38.378 [2024-12-05 03:05:09.034141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:38.378 [2024-12-05 03:05:09.034147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:38.378 [2024-12-05 03:05:09.034153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.378 [2024-12-05 03:05:09.034159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:38.378 [2024-12-05 03:05:09.034164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:38.378 [2024-12-05 03:05:09.034169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:38.378 [2024-12-05 03:05:09.034174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:38.378 [2024-12-05 03:05:09.034180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:38.378 [2024-12-05 03:05:09.034186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:38.378 [2024-12-05 03:05:09.034191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:38.378 [2024-12-05 03:05:09.034200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:38.378 [2024-12-05 03:05:09.034206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:38.378 [2024-12-05 03:05:09.034211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:38.378 [2024-12-05 03:05:09.034216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:38.378 [2024-12-05 03:05:09.034222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.378 [2024-12-05 03:05:09.034228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:38.378 [2024-12-05 03:05:09.034233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:38.378 [2024-12-05 03:05:09.034238] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.378 [2024-12-05 03:05:09.034243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:38.378 [2024-12-05 03:05:09.034249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:38.378 [2024-12-05 03:05:09.034254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:38.378 [2024-12-05 03:05:09.034259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:38.378 [2024-12-05 03:05:09.034265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:38.378 [2024-12-05 03:05:09.034269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:38.378 [2024-12-05 03:05:09.034274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:38.378 [2024-12-05 03:05:09.034280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:38.378 [2024-12-05 03:05:09.034285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:38.378 [2024-12-05 03:05:09.034290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:38.378 [2024-12-05 03:05:09.034295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:38.378 [2024-12-05 03:05:09.034300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:38.378 [2024-12-05 03:05:09.034305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:38.378 [2024-12-05 03:05:09.034310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:38.378 [2024-12-05 03:05:09.034315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:38.378 [2024-12-05 03:05:09.034320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:38.378 [2024-12-05 03:05:09.034325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:38.378 [2024-12-05 03:05:09.034330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:38.378 [2024-12-05 03:05:09.034335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:38.378 [2024-12-05 03:05:09.034340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:38.378 [2024-12-05 03:05:09.034345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.378 [2024-12-05 03:05:09.034350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:38.378 [2024-12-05 03:05:09.034355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:38.378 [2024-12-05 03:05:09.034360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.378 [2024-12-05 03:05:09.034366] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:38.378 [2024-12-05 03:05:09.034373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:38.378 [2024-12-05 03:05:09.034380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:38.378 [2024-12-05 03:05:09.034386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.378 [2024-12-05 03:05:09.034391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:38.378 [2024-12-05 03:05:09.034397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:38.378 [2024-12-05 03:05:09.034402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:38.378 
[2024-12-05 03:05:09.034408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:38.378 [2024-12-05 03:05:09.034413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:38.378 [2024-12-05 03:05:09.034418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:38.378 [2024-12-05 03:05:09.034424] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:38.378 [2024-12-05 03:05:09.034431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:38.378 [2024-12-05 03:05:09.034437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:38.378 [2024-12-05 03:05:09.034443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:38.378 [2024-12-05 03:05:09.034448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:38.378 [2024-12-05 03:05:09.034453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:38.379 [2024-12-05 03:05:09.034459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:38.379 [2024-12-05 03:05:09.034464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:38.379 [2024-12-05 03:05:09.034470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:38.379 [2024-12-05 03:05:09.034475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:38.379 [2024-12-05 03:05:09.034480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:38.379 [2024-12-05 03:05:09.034486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:38.379 [2024-12-05 03:05:09.034491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:38.379 [2024-12-05 03:05:09.034496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:38.379 [2024-12-05 03:05:09.034502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:38.379 [2024-12-05 03:05:09.034507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:38.379 [2024-12-05 03:05:09.034512] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:38.379 [2024-12-05 03:05:09.034519] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:38.379 [2024-12-05 03:05:09.034525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:38.379 [2024-12-05 03:05:09.034531] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:38.379 [2024-12-05 03:05:09.034536] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:38.379 [2024-12-05 03:05:09.034541] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:38.379 [2024-12-05 03:05:09.034547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.379 [2024-12-05 03:05:09.034556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:38.379 [2024-12-05 03:05:09.034561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.483 ms 00:20:38.379 [2024-12-05 03:05:09.034567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.379 [2024-12-05 03:05:09.055100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.379 [2024-12-05 03:05:09.055126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:38.379 [2024-12-05 03:05:09.055134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.492 ms 00:20:38.379 [2024-12-05 03:05:09.055140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.379 [2024-12-05 03:05:09.055233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.379 [2024-12-05 03:05:09.055241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:38.379 [2024-12-05 03:05:09.055248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:38.379 [2024-12-05 03:05:09.055253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.379 [2024-12-05 03:05:09.092295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.379 [2024-12-05 03:05:09.092410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:38.379 [2024-12-05 03:05:09.092428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.026 ms 00:20:38.379 [2024-12-05 03:05:09.092434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.379 [2024-12-05 03:05:09.092492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.379 [2024-12-05 03:05:09.092501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:38.379 [2024-12-05 03:05:09.092508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:38.379 [2024-12-05 03:05:09.092513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.379 [2024-12-05 03:05:09.092796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.379 [2024-12-05 03:05:09.092808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:38.379 [2024-12-05 03:05:09.092815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:20:38.379 [2024-12-05 03:05:09.092824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.379 [2024-12-05 03:05:09.092926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.379 [2024-12-05 03:05:09.092934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:38.379 [2024-12-05 03:05:09.092940] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:20:38.379 [2024-12-05 03:05:09.092945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.379 [2024-12-05 03:05:09.103618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.379 [2024-12-05 03:05:09.103712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:38.379 [2024-12-05 03:05:09.103724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.658 ms 00:20:38.379 [2024-12-05 03:05:09.103730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.379 [2024-12-05 03:05:09.113415] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:38.379 [2024-12-05 03:05:09.113511] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:38.379 [2024-12-05 03:05:09.113562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.379 [2024-12-05 03:05:09.113578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:38.379 [2024-12-05 03:05:09.113593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.758 ms 00:20:38.379 [2024-12-05 03:05:09.113607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.379 [2024-12-05 03:05:09.132132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.379 [2024-12-05 03:05:09.132221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:38.379 [2024-12-05 03:05:09.132274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.475 ms 00:20:38.379 [2024-12-05 03:05:09.132293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.379 [2024-12-05 03:05:09.141128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.379 [2024-12-05 03:05:09.141213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:38.379 [2024-12-05 03:05:09.141253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.785 ms 00:20:38.379 [2024-12-05 03:05:09.141269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.379 [2024-12-05 03:05:09.150007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.379 [2024-12-05 03:05:09.150105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:38.379 [2024-12-05 03:05:09.150149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.692 ms 00:20:38.379 [2024-12-05 03:05:09.150166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.379 [2024-12-05 03:05:09.150628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.379 [2024-12-05 03:05:09.150699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:38.379 [2024-12-05 03:05:09.150736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:20:38.379 [2024-12-05 03:05:09.150753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.379 [2024-12-05 03:05:09.194796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.379 [2024-12-05 03:05:09.194908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:38.379 [2024-12-05 03:05:09.194948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
44.014 ms 00:20:38.379 [2024-12-05 03:05:09.194966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.379 [2024-12-05 03:05:09.202946] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:38.379 [2024-12-05 03:05:09.214230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.379 [2024-12-05 03:05:09.214323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:38.379 [2024-12-05 03:05:09.214359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.203 ms 00:20:38.379 [2024-12-05 03:05:09.214380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.379 [2024-12-05 03:05:09.214455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.379 [2024-12-05 03:05:09.214475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:38.379 [2024-12-05 03:05:09.214491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:38.379 [2024-12-05 03:05:09.214506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.379 [2024-12-05 03:05:09.214550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.379 [2024-12-05 03:05:09.214567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:38.379 [2024-12-05 03:05:09.214582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:38.379 [2024-12-05 03:05:09.214638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.379 [2024-12-05 03:05:09.214676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.379 [2024-12-05 03:05:09.214694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:38.379 [2024-12-05 03:05:09.214708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:38.379 [2024-12-05 03:05:09.214723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.379 [2024-12-05 03:05:09.214756] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:38.379 [2024-12-05 03:05:09.214774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.379 [2024-12-05 03:05:09.214788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:38.379 [2024-12-05 03:05:09.215019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:38.379 [2024-12-05 03:05:09.215033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.641 [2024-12-05 03:05:09.232619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.641 [2024-12-05 03:05:09.232706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:38.641 [2024-12-05 03:05:09.232744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.560 ms 00:20:38.641 [2024-12-05 03:05:09.232760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.641 [2024-12-05 03:05:09.232831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.641 [2024-12-05 03:05:09.232850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:38.641 [2024-12-05 03:05:09.232866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:38.641 [2024-12-05 03:05:09.232880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.641 
[2024-12-05 03:05:09.233715] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:38.641 [2024-12-05 03:05:09.236093] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 220.528 ms, result 0 00:20:38.641 [2024-12-05 03:05:09.236854] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:38.641 [2024-12-05 03:05:09.247863] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:38.641  [2024-12-05T03:05:09.485Z] Copying: 4096/4096 [kB] (average 33 MBps)[2024-12-05 03:05:09.369639] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:38.641 [2024-12-05 03:05:09.376146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.641 [2024-12-05 03:05:09.376232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:38.641 [2024-12-05 03:05:09.376286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:38.641 [2024-12-05 03:05:09.376305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.641 [2024-12-05 03:05:09.376332] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:38.641 [2024-12-05 03:05:09.378461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.641 [2024-12-05 03:05:09.378539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:38.641 [2024-12-05 03:05:09.378581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.070 ms 00:20:38.641 [2024-12-05 03:05:09.378598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.641 [2024-12-05 03:05:09.380274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.641 [2024-12-05 03:05:09.380347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:38.641 [2024-12-05 03:05:09.380385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.649 ms 00:20:38.641 [2024-12-05 03:05:09.380401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.641 [2024-12-05 03:05:09.383492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.641 [2024-12-05 03:05:09.383558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:38.641 [2024-12-05 03:05:09.383593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.065 ms 00:20:38.641 [2024-12-05 03:05:09.383610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.641 [2024-12-05 03:05:09.388858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.641 [2024-12-05 03:05:09.388933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:38.641 [2024-12-05 03:05:09.388974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.219 ms 00:20:38.641 [2024-12-05 03:05:09.388990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.641 [2024-12-05 03:05:09.406207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.641 [2024-12-05 03:05:09.406294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:38.641 [2024-12-05 03:05:09.406337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 17.167 ms 00:20:38.641 [2024-12-05 03:05:09.406354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.641 [2024-12-05 03:05:09.417731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.641 [2024-12-05 03:05:09.417817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:38.641 [2024-12-05 03:05:09.417857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.344 ms 00:20:38.641 [2024-12-05 03:05:09.417874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.641 [2024-12-05 03:05:09.417979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.641 [2024-12-05 03:05:09.417997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:38.641 [2024-12-05 03:05:09.418018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:38.641 [2024-12-05 03:05:09.418032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.641 [2024-12-05 03:05:09.435584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.641 [2024-12-05 03:05:09.435663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:38.641 [2024-12-05 03:05:09.435700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.531 ms 00:20:38.641 [2024-12-05 03:05:09.435715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.641 [2024-12-05 03:05:09.453051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.641 [2024-12-05 03:05:09.453144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:38.641 [2024-12-05 03:05:09.453181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.298 ms 00:20:38.641 [2024-12-05 03:05:09.453197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.641 [2024-12-05 03:05:09.470215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.641 [2024-12-05 03:05:09.470292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:38.641 [2024-12-05 03:05:09.470328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.987 ms 00:20:38.641 [2024-12-05 03:05:09.470343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.904 [2024-12-05 03:05:09.486902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.904 [2024-12-05 03:05:09.486983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:38.904 [2024-12-05 03:05:09.487019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.512 ms 00:20:38.904 [2024-12-05 03:05:09.487034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.904 [2024-12-05 03:05:09.487064] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:38.904 [2024-12-05 03:05:09.487099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 
[2024-12-05 03:05:09.487222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: 
free 00:20:38.904 [2024-12-05 03:05:09.487678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 
261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:38.904 [2024-12-05 03:05:09.487933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:38.905 [2024-12-05 03:05:09.487939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:38.905 [2024-12-05 03:05:09.487944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:38.905 [2024-12-05 03:05:09.487951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:38.905 [2024-12-05 03:05:09.487957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:38.905 [2024-12-05 03:05:09.487962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:38.905 [2024-12-05 03:05:09.487968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:38.905 [2024-12-05 03:05:09.487974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:38.905 [2024-12-05 03:05:09.487979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:38.905 [2024-12-05 03:05:09.487985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:38.905 [2024-12-05 03:05:09.487991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:38.905 [2024-12-05 03:05:09.487996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:38.905 [2024-12-05 03:05:09.488002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:38.905 [2024-12-05 03:05:09.488008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:38.905 [2024-12-05 03:05:09.488013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:38.905 [2024-12-05 03:05:09.488019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:38.905 [2024-12-05 03:05:09.488025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:38.905 [2024-12-05 03:05:09.488030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:38.905 [2024-12-05 03:05:09.488036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:38.905 [2024-12-05 03:05:09.488047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:38.905 [2024-12-05 03:05:09.488053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:38.905 [2024-12-05 03:05:09.488059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:38.905 [2024-12-05 03:05:09.488064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:38.905 [2024-12-05 03:05:09.488078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:38.905 [2024-12-05 03:05:09.488084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:38.905 [2024-12-05 03:05:09.488090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:38.905 [2024-12-05 03:05:09.488102] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:38.905 [2024-12-05 03:05:09.488108] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 39a924c5-8443-42fa-9f63-e8e457595e05 00:20:38.905 [2024-12-05 03:05:09.488114] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:38.905 [2024-12-05 03:05:09.488119] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 
00:20:38.905 [2024-12-05 03:05:09.488125] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:38.905 [2024-12-05 03:05:09.488131] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:38.905 [2024-12-05 03:05:09.488136] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:38.905 [2024-12-05 03:05:09.488143] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:38.905 [2024-12-05 03:05:09.488151] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:38.905 [2024-12-05 03:05:09.488156] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:38.905 [2024-12-05 03:05:09.488163] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:38.905 [2024-12-05 03:05:09.488169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.905 [2024-12-05 03:05:09.488174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:38.905 [2024-12-05 03:05:09.488181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.105 ms 00:20:38.905 [2024-12-05 03:05:09.488186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.905 [2024-12-05 03:05:09.497604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.905 [2024-12-05 03:05:09.497688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:38.905 [2024-12-05 03:05:09.497728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.402 ms 00:20:38.905 [2024-12-05 03:05:09.497745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.905 [2024-12-05 03:05:09.498030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.905 [2024-12-05 03:05:09.498118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:38.905 [2024-12-05 03:05:09.498159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:20:38.905 [2024-12-05 03:05:09.498176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.905 [2024-12-05 03:05:09.525709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.905 [2024-12-05 03:05:09.525793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:38.905 [2024-12-05 03:05:09.525831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.905 [2024-12-05 03:05:09.525851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.905 [2024-12-05 03:05:09.525918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.905 [2024-12-05 03:05:09.525936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:38.905 [2024-12-05 03:05:09.525951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.905 [2024-12-05 03:05:09.525965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.905 [2024-12-05 03:05:09.526006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.905 [2024-12-05 03:05:09.526024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:38.905 [2024-12-05 03:05:09.526038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.905 [2024-12-05 03:05:09.526100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.905 [2024-12-05 03:05:09.526146] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.905 [2024-12-05 03:05:09.526164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:38.905 [2024-12-05 03:05:09.526254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.905 [2024-12-05 03:05:09.526270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.905 [2024-12-05 03:05:09.584436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.905 [2024-12-05 03:05:09.584549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:38.905 [2024-12-05 03:05:09.584587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.905 [2024-12-05 03:05:09.584609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.905 [2024-12-05 03:05:09.632941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.905 [2024-12-05 03:05:09.633046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:38.905 [2024-12-05 03:05:09.633102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.905 [2024-12-05 03:05:09.633122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.905 [2024-12-05 03:05:09.633181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.905 [2024-12-05 03:05:09.633199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:38.905 [2024-12-05 03:05:09.633243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.905 [2024-12-05 03:05:09.633260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.905 [2024-12-05 03:05:09.633291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.905 [2024-12-05 03:05:09.633310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:38.905 [2024-12-05 03:05:09.633325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.905 [2024-12-05 03:05:09.633357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.905 [2024-12-05 03:05:09.633460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.905 [2024-12-05 03:05:09.633481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:38.905 [2024-12-05 03:05:09.633520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.905 [2024-12-05 03:05:09.633537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.905 [2024-12-05 03:05:09.633574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.905 [2024-12-05 03:05:09.633617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:38.905 [2024-12-05 03:05:09.633638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.905 [2024-12-05 03:05:09.633652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.905 [2024-12-05 03:05:09.633709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.905 [2024-12-05 03:05:09.633728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:38.905 [2024-12-05 03:05:09.633742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.905 [2024-12-05 03:05:09.633756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:38.905 [2024-12-05 03:05:09.633826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.905 [2024-12-05 03:05:09.633850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:38.905 [2024-12-05 03:05:09.633865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.905 [2024-12-05 03:05:09.633879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.905 [2024-12-05 03:05:09.633991] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 257.823 ms, result 0 00:20:39.478 00:20:39.478 00:20:39.478 03:05:10 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76933 00:20:39.478 03:05:10 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:39.478 03:05:10 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76933 00:20:39.478 03:05:10 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76933 ']' 00:20:39.478 03:05:10 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.478 03:05:10 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:39.478 03:05:10 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.478 03:05:10 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:39.478 03:05:10 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:39.478 [2024-12-05 03:05:10.302093] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:20:39.478 [2024-12-05 03:05:10.302884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76933 ] 00:20:39.740 [2024-12-05 03:05:10.474980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.740 [2024-12-05 03:05:10.551812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.313 03:05:11 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:40.313 03:05:11 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:40.313 03:05:11 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:40.574 [2024-12-05 03:05:11.352409] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:40.574 [2024-12-05 03:05:11.352459] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:40.837 [2024-12-05 03:05:11.520555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.837 [2024-12-05 03:05:11.520589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:40.837 [2024-12-05 03:05:11.520601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:40.837 [2024-12-05 03:05:11.520607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.837 [2024-12-05 03:05:11.522669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.837 [2024-12-05 03:05:11.522797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:40.837 [2024-12-05 03:05:11.522814] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.047 ms 00:20:40.837 [2024-12-05 03:05:11.522820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.837 [2024-12-05 03:05:11.522876] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:40.837 [2024-12-05 03:05:11.523404] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:40.837 [2024-12-05 03:05:11.523422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.837 [2024-12-05 03:05:11.523429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:40.837 [2024-12-05 03:05:11.523436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:20:40.837 [2024-12-05 03:05:11.523443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.837 [2024-12-05 03:05:11.524422] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:40.837 [2024-12-05 03:05:11.534106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.837 [2024-12-05 03:05:11.534138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:40.837 [2024-12-05 03:05:11.534147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.687 ms 00:20:40.837 [2024-12-05 03:05:11.534156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.837 [2024-12-05 03:05:11.534227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.837 [2024-12-05 03:05:11.534237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:40.837 [2024-12-05 03:05:11.534243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:40.837 [2024-12-05 03:05:11.534250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.837 [2024-12-05 03:05:11.538486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.837 [2024-12-05 03:05:11.538515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:40.837 [2024-12-05 03:05:11.538523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.199 ms 00:20:40.837 [2024-12-05 03:05:11.538530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.837 [2024-12-05 03:05:11.538601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.837 [2024-12-05 03:05:11.538611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:40.837 [2024-12-05 03:05:11.538617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:40.837 [2024-12-05 03:05:11.538627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.837 [2024-12-05 03:05:11.538646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.837 [2024-12-05 03:05:11.538653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:40.837 [2024-12-05 03:05:11.538659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:40.837 [2024-12-05 03:05:11.538666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.837 [2024-12-05 03:05:11.538682] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:40.837 [2024-12-05 03:05:11.541378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:40.837 [2024-12-05 03:05:11.541487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:40.837 [2024-12-05 03:05:11.541502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.698 ms 00:20:40.837 [2024-12-05 03:05:11.541508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.837 [2024-12-05 03:05:11.541539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.837 [2024-12-05 03:05:11.541545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:40.837 [2024-12-05 03:05:11.541553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:40.837 [2024-12-05 03:05:11.541560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.837 [2024-12-05 03:05:11.541575] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:40.837 [2024-12-05 03:05:11.541589] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:40.837 [2024-12-05 03:05:11.541622] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:40.837 [2024-12-05 03:05:11.541634] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:40.837 [2024-12-05 03:05:11.541713] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:40.837 [2024-12-05 03:05:11.541722] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:40.837 [2024-12-05 03:05:11.541732] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:40.837 [2024-12-05 03:05:11.541740] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:40.837 [2024-12-05 03:05:11.541748] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:40.837 [2024-12-05 03:05:11.541754] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:40.837 [2024-12-05 03:05:11.541761] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:40.837 [2024-12-05 03:05:11.541766] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:40.837 [2024-12-05 03:05:11.541774] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:40.837 [2024-12-05 03:05:11.541779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.837 [2024-12-05 03:05:11.541787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:40.837 [2024-12-05 03:05:11.541793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:20:40.837 [2024-12-05 03:05:11.541800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.837 [2024-12-05 03:05:11.541866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.837 [2024-12-05 03:05:11.541873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:40.837 [2024-12-05 03:05:11.541879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:40.837 [2024-12-05 03:05:11.541886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.837 
[2024-12-05 03:05:11.541961] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:40.837 [2024-12-05 03:05:11.541969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:40.837 [2024-12-05 03:05:11.541975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:40.837 [2024-12-05 03:05:11.541982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.837 [2024-12-05 03:05:11.541988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:40.837 [2024-12-05 03:05:11.541995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:40.837 [2024-12-05 03:05:11.542000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:40.837 [2024-12-05 03:05:11.542008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:40.837 [2024-12-05 03:05:11.542014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:40.837 [2024-12-05 03:05:11.542020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:40.837 [2024-12-05 03:05:11.542025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:40.837 [2024-12-05 03:05:11.542031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:40.837 [2024-12-05 03:05:11.542035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:40.837 [2024-12-05 03:05:11.542042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:40.837 [2024-12-05 03:05:11.542047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:40.837 [2024-12-05 03:05:11.542053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.837 [2024-12-05 03:05:11.542059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:40.837 [2024-12-05 03:05:11.542066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:40.837 [2024-12-05 03:05:11.542092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.837 [2024-12-05 03:05:11.542099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:40.837 [2024-12-05 03:05:11.542104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:40.837 [2024-12-05 03:05:11.542110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.837 [2024-12-05 03:05:11.542115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:40.837 [2024-12-05 03:05:11.542123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:40.837 [2024-12-05 03:05:11.542128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.837 [2024-12-05 03:05:11.542134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:40.837 [2024-12-05 03:05:11.542139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:40.837 [2024-12-05 03:05:11.542145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.837 [2024-12-05 03:05:11.542150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:40.837 [2024-12-05 03:05:11.542158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:40.837 [2024-12-05 03:05:11.542162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.837 [2024-12-05 03:05:11.542169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md 00:20:40.837 [2024-12-05 03:05:11.542174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:40.837 [2024-12-05 03:05:11.542180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:40.837 [2024-12-05 03:05:11.542185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:40.838 [2024-12-05 03:05:11.542191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:40.838 [2024-12-05 03:05:11.542196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:40.838 [2024-12-05 03:05:11.542202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:40.838 [2024-12-05 03:05:11.542208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:40.838 [2024-12-05 03:05:11.542215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.838 [2024-12-05 03:05:11.542220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:40.838 [2024-12-05 03:05:11.542226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:40.838 [2024-12-05 03:05:11.542231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.838 [2024-12-05 03:05:11.542237] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:40.838 [2024-12-05 03:05:11.542245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:40.838 [2024-12-05 03:05:11.542251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:40.838 [2024-12-05 03:05:11.542257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.838 [2024-12-05 03:05:11.542263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:40.838 [2024-12-05 03:05:11.542269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:40.838 [2024-12-05 03:05:11.542276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:40.838 [2024-12-05 03:05:11.542281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:40.838 [2024-12-05 03:05:11.542287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:40.838 [2024-12-05 03:05:11.542295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:40.838 [2024-12-05 03:05:11.542302] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:40.838 [2024-12-05 03:05:11.542309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:40.838 [2024-12-05 03:05:11.542318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:40.838 [2024-12-05 03:05:11.542324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:40.838 [2024-12-05 03:05:11.542331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:40.838 [2024-12-05 03:05:11.542336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:40.838 [2024-12-05 03:05:11.542343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 
blk_offs:0x6320 blk_sz:0x800 00:20:40.838 [2024-12-05 03:05:11.542348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:40.838 [2024-12-05 03:05:11.542354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:40.838 [2024-12-05 03:05:11.542360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:40.838 [2024-12-05 03:05:11.542366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:40.838 [2024-12-05 03:05:11.542372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:40.838 [2024-12-05 03:05:11.542378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:40.838 [2024-12-05 03:05:11.542383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:40.838 [2024-12-05 03:05:11.542389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:40.838 [2024-12-05 03:05:11.542395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:40.838 [2024-12-05 03:05:11.542401] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:40.838 [2024-12-05 03:05:11.542408] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:40.838 [2024-12-05 03:05:11.542416] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:40.838 [2024-12-05 03:05:11.542421] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:40.838 [2024-12-05 03:05:11.542428] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:40.838 [2024-12-05 03:05:11.542434] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:40.838 [2024-12-05 03:05:11.542440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.838 [2024-12-05 03:05:11.542446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:40.838 [2024-12-05 03:05:11.542453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:20:40.838 [2024-12-05 03:05:11.542459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.838 [2024-12-05 03:05:11.562971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.838 [2024-12-05 03:05:11.562997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:40.838 [2024-12-05 03:05:11.563007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.467 ms 00:20:40.838 [2024-12-05 03:05:11.563014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.838 
[2024-12-05 03:05:11.563117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.838 [2024-12-05 03:05:11.563126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:40.838 [2024-12-05 03:05:11.563133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:20:40.838 [2024-12-05 03:05:11.563139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.838 [2024-12-05 03:05:11.586822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.838 [2024-12-05 03:05:11.586850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:40.838 [2024-12-05 03:05:11.586859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.664 ms 00:20:40.838 [2024-12-05 03:05:11.586866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.838 [2024-12-05 03:05:11.586909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.838 [2024-12-05 03:05:11.586916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:40.838 [2024-12-05 03:05:11.586924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:40.838 [2024-12-05 03:05:11.586929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.838 [2024-12-05 03:05:11.587225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.838 [2024-12-05 03:05:11.587236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:40.838 [2024-12-05 03:05:11.587246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:20:40.838 [2024-12-05 03:05:11.587252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.838 [2024-12-05 03:05:11.587350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.838 [2024-12-05 03:05:11.587357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:40.838 [2024-12-05 03:05:11.587364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:20:40.838 [2024-12-05 03:05:11.587370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.838 [2024-12-05 03:05:11.598817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.838 [2024-12-05 03:05:11.598842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:40.838 [2024-12-05 03:05:11.598852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.431 ms 00:20:40.838 [2024-12-05 03:05:11.598858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.838 [2024-12-05 03:05:11.626076] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:40.838 [2024-12-05 03:05:11.626106] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:40.838 [2024-12-05 03:05:11.626118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.838 [2024-12-05 03:05:11.626125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:40.838 [2024-12-05 03:05:11.626133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.187 ms 00:20:40.838 [2024-12-05 03:05:11.626143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.838 [2024-12-05 03:05:11.644598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:40.838 [2024-12-05 03:05:11.644630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:40.838 [2024-12-05 03:05:11.644643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.396 ms 00:20:40.838 [2024-12-05 03:05:11.644649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.838 [2024-12-05 03:05:11.653891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.838 [2024-12-05 03:05:11.653997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:40.838 [2024-12-05 03:05:11.654014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.176 ms 00:20:40.838 [2024-12-05 03:05:11.654020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.838 [2024-12-05 03:05:11.662511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.838 [2024-12-05 03:05:11.662534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:40.838 [2024-12-05 03:05:11.662543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.449 ms 00:20:40.838 [2024-12-05 03:05:11.662549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.838 [2024-12-05 03:05:11.663005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.838 [2024-12-05 03:05:11.663024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:40.838 [2024-12-05 03:05:11.663033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:20:40.838 [2024-12-05 03:05:11.663039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.099 [2024-12-05 03:05:11.706171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.099 [2024-12-05 03:05:11.706203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:41.099 [2024-12-05 03:05:11.706214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.113 ms 00:20:41.099 [2024-12-05 03:05:11.706220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.099 [2024-12-05 03:05:11.714167] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:41.099 [2024-12-05 03:05:11.725351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.099 [2024-12-05 03:05:11.725381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:41.099 [2024-12-05 03:05:11.725392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.077 ms 00:20:41.099 [2024-12-05 03:05:11.725400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.099 [2024-12-05 03:05:11.725466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.099 [2024-12-05 03:05:11.725475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:41.099 [2024-12-05 03:05:11.725481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:41.099 [2024-12-05 03:05:11.725489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.099 [2024-12-05 03:05:11.725526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.099 [2024-12-05 03:05:11.725534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:41.099 [2024-12-05 03:05:11.725540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.023 ms 00:20:41.099 [2024-12-05 03:05:11.725549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.099 [2024-12-05 03:05:11.725566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.099 [2024-12-05 03:05:11.725573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:41.099 [2024-12-05 03:05:11.725579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:41.099 [2024-12-05 03:05:11.725587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.100 [2024-12-05 03:05:11.725609] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:41.100 [2024-12-05 03:05:11.725619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.100 [2024-12-05 03:05:11.725626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:41.100 [2024-12-05 03:05:11.725633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:41.100 [2024-12-05 03:05:11.725638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.100 [2024-12-05 03:05:11.742953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.100 [2024-12-05 03:05:11.742978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:41.100 [2024-12-05 03:05:11.742987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.292 ms 00:20:41.100 [2024-12-05 03:05:11.742994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.100 [2024-12-05 03:05:11.743059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.100 [2024-12-05 03:05:11.743066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:41.100 [2024-12-05 03:05:11.743088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:41.100 [2024-12-05 03:05:11.743096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.100 [2024-12-05 03:05:11.743705] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:41.100 [2024-12-05 03:05:11.745994] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 222.936 ms, result 0 00:20:41.100 [2024-12-05 03:05:11.747039] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:41.100 Some configs were skipped because the RPC state that can call them passed over. 
00:20:41.100 03:05:11 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:41.361 [2024-12-05 03:05:11.967348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.361 [2024-12-05 03:05:11.967460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:41.361 [2024-12-05 03:05:11.967506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.627 ms 00:20:41.361 [2024-12-05 03:05:11.967526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.361 [2024-12-05 03:05:11.967564] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.843 ms, result 0 00:20:41.361 true 00:20:41.361 03:05:11 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:41.361 [2024-12-05 03:05:12.171029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.361 [2024-12-05 03:05:12.171146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:41.361 [2024-12-05 03:05:12.171194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.122 ms 00:20:41.361 [2024-12-05 03:05:12.171211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.361 [2024-12-05 03:05:12.171254] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.344 ms, result 0 00:20:41.361 true 00:20:41.361 03:05:12 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76933 00:20:41.361 03:05:12 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76933 ']' 00:20:41.361 03:05:12 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76933 00:20:41.361 03:05:12 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:41.361 03:05:12 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:41.361 03:05:12 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76933 00:20:41.668 killing process with pid 76933 00:20:41.668 03:05:12 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:41.668 03:05:12 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:41.668 03:05:12 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76933' 00:20:41.668 03:05:12 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76933 00:20:41.668 03:05:12 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76933 00:20:41.941 [2024-12-05 03:05:12.737357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.941 [2024-12-05 03:05:12.737406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:41.941 [2024-12-05 03:05:12.737416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:41.941 [2024-12-05 03:05:12.737423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.941 [2024-12-05 03:05:12.737442] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:41.941 [2024-12-05 03:05:12.739542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.941 [2024-12-05 03:05:12.739574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:41.941 [2024-12-05 03:05:12.739585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 2.086 ms 00:20:41.941 [2024-12-05 03:05:12.739591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.941 [2024-12-05 03:05:12.739812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.941 [2024-12-05 03:05:12.739820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:41.941 [2024-12-05 03:05:12.739828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:20:41.941 [2024-12-05 03:05:12.739833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.941 [2024-12-05 03:05:12.743069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.941 [2024-12-05 03:05:12.743101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:41.941 [2024-12-05 03:05:12.743111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.220 ms 00:20:41.941 [2024-12-05 03:05:12.743117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.942 [2024-12-05 03:05:12.748402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.942 [2024-12-05 03:05:12.748539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:41.942 [2024-12-05 03:05:12.748556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.257 ms 00:20:41.942 [2024-12-05 03:05:12.748562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.942 [2024-12-05 03:05:12.755926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.942 [2024-12-05 03:05:12.756027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:41.942 [2024-12-05 03:05:12.756042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.322 ms 00:20:41.942 [2024-12-05 03:05:12.756048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.942 [2024-12-05 03:05:12.762784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.942 [2024-12-05 03:05:12.762880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:41.942 [2024-12-05 03:05:12.762894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.698 ms 00:20:41.942 [2024-12-05 03:05:12.762900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.942 [2024-12-05 03:05:12.763005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.942 [2024-12-05 03:05:12.763013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:41.942 [2024-12-05 03:05:12.763021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:41.942 [2024-12-05 03:05:12.763026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.942 [2024-12-05 03:05:12.771040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.942 [2024-12-05 03:05:12.771064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:41.942 [2024-12-05 03:05:12.771083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.996 ms 00:20:41.942 [2024-12-05 03:05:12.771090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.942 [2024-12-05 03:05:12.778361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.942 [2024-12-05 03:05:12.778385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:41.942 [2024-12-05 
03:05:12.778397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.241 ms 00:20:41.942 [2024-12-05 03:05:12.778402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.203 [2024-12-05 03:05:12.785495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.203 [2024-12-05 03:05:12.785586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:42.203 [2024-12-05 03:05:12.785599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.055 ms 00:20:42.203 [2024-12-05 03:05:12.785605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.203 [2024-12-05 03:05:12.792452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.203 [2024-12-05 03:05:12.792542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:42.203 [2024-12-05 03:05:12.792554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.798 ms 00:20:42.203 [2024-12-05 03:05:12.792559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.203 [2024-12-05 03:05:12.792584] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:42.203 [2024-12-05 03:05:12.792595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:42.203 [2024-12-05 03:05:12.792603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:42.203 [2024-12-05 03:05:12.792609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:42.203 [2024-12-05 03:05:12.792616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:42.203 [2024-12-05 03:05:12.792622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:42.203 [2024-12-05 03:05:12.792630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:42.203 [2024-12-05 03:05:12.792636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:42.203 [2024-12-05 03:05:12.792643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:42.203 [2024-12-05 03:05:12.792648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:42.203 [2024-12-05 03:05:12.792655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792701] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 
03:05:12.792868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.792994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:20:42.204 [2024-12-05 03:05:12.793028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:42.204 [2024-12-05 03:05:12.793253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:42.205 [2024-12-05 03:05:12.793260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:42.205 [2024-12-05 03:05:12.793276] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:42.205 [2024-12-05 03:05:12.793286] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 39a924c5-8443-42fa-9f63-e8e457595e05 00:20:42.205 [2024-12-05 03:05:12.793294] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:42.205 [2024-12-05 03:05:12.793301] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:42.205 [2024-12-05 03:05:12.793306] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:42.205 [2024-12-05 03:05:12.793313] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:42.205 [2024-12-05 03:05:12.793318] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:42.205 [2024-12-05 03:05:12.793326] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:42.205 [2024-12-05 03:05:12.793331] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:42.205 [2024-12-05 03:05:12.793337] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:42.205 [2024-12-05 03:05:12.793342] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:42.205 [2024-12-05 03:05:12.793349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.205 [2024-12-05 03:05:12.793355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:42.205 [2024-12-05 03:05:12.793362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.766 ms 00:20:42.205 [2024-12-05 03:05:12.793368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.205 [2024-12-05 03:05:12.802875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.205 [2024-12-05 03:05:12.802897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:42.205 [2024-12-05 03:05:12.802907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.488 ms 00:20:42.205 [2024-12-05 03:05:12.802913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.205 [2024-12-05 03:05:12.803213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:42.205 [2024-12-05 03:05:12.803222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:42.205 [2024-12-05 03:05:12.803231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:20:42.205 [2024-12-05 03:05:12.803237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.205 [2024-12-05 03:05:12.838017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.205 [2024-12-05 03:05:12.838044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:42.205 [2024-12-05 03:05:12.838054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.205 [2024-12-05 03:05:12.838060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.205 [2024-12-05 03:05:12.838142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.205 [2024-12-05 03:05:12.838150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:42.205 [2024-12-05 03:05:12.838159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.205 [2024-12-05 03:05:12.838165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.205 [2024-12-05 03:05:12.838200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.205 [2024-12-05 03:05:12.838207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:42.205 [2024-12-05 03:05:12.838217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.205 [2024-12-05 03:05:12.838222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.205 [2024-12-05 03:05:12.838236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.205 [2024-12-05 03:05:12.838242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:42.205 [2024-12-05 03:05:12.838249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.205 [2024-12-05 03:05:12.838256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.205 [2024-12-05 03:05:12.898366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.205 [2024-12-05 03:05:12.898493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:42.205 [2024-12-05 03:05:12.898507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.205 [2024-12-05 03:05:12.898514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.205 [2024-12-05 03:05:12.947156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.205 [2024-12-05 03:05:12.947190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:42.205 [2024-12-05 03:05:12.947209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.205 [2024-12-05 03:05:12.947217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.205 [2024-12-05 03:05:12.947278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.205 [2024-12-05 03:05:12.947286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:42.205 [2024-12-05 03:05:12.947295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.205 [2024-12-05 03:05:12.947301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:42.205 [2024-12-05 03:05:12.947325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.205 [2024-12-05 03:05:12.947331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:42.205 [2024-12-05 03:05:12.947339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.205 [2024-12-05 03:05:12.947344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.205 [2024-12-05 03:05:12.947415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.205 [2024-12-05 03:05:12.947422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:42.205 [2024-12-05 03:05:12.947429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.205 [2024-12-05 03:05:12.947435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.205 [2024-12-05 03:05:12.947462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.205 [2024-12-05 03:05:12.947469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:42.205 [2024-12-05 03:05:12.947476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.205 [2024-12-05 03:05:12.947482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.205 [2024-12-05 03:05:12.947514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.205 [2024-12-05 03:05:12.947520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:42.205 [2024-12-05 03:05:12.947529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.205 [2024-12-05 03:05:12.947535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.205 [2024-12-05 03:05:12.947570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.205 [2024-12-05 03:05:12.947577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:42.205 [2024-12-05 03:05:12.947584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.205 [2024-12-05 03:05:12.947590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.205 [2024-12-05 03:05:12.947695] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 210.319 ms, result 0 00:20:42.778 03:05:13 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:42.778 [2024-12-05 03:05:13.533898] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
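For reference, the shutdown dump above closes the trim-only pass with every band reported as "0 / 261120 wr_cnt: 0 state: free", 960 total writes, 0 user writes and "WAF: inf". Write amplification is conventionally the ratio of media writes to host writes, so a zero denominator is reported as infinity; treating the ftl_debug.c "WAF" line this way is an assumption about the reporting, not something the log states. A one-line illustration of the arithmetic:

    # Hypothetical illustration of the conventional write-amplification calculation
    # assumed to be behind the "WAF: inf" line above (total media writes / user writes).
    def waf(total_writes: int, user_writes: int) -> float:
        return float("inf") if user_writes == 0 else total_writes / user_writes

    print(waf(960, 0))  # inf -- the trim-only pass issued no user writes

The spdk_dd invocation at ftl/trim.sh@105 then reads 65536 blocks back from ftl0 into test/ftl/data (256 MiB, consistent with a 4 KiB block size), which is what the "Copying: .../256 [MB]" progress entries further down track.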
00:20:42.778 [2024-12-05 03:05:13.534007] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76980 ] 00:20:43.039 [2024-12-05 03:05:13.689870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.039 [2024-12-05 03:05:13.771006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.301 [2024-12-05 03:05:13.979288] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:43.301 [2024-12-05 03:05:13.979491] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:43.301 [2024-12-05 03:05:14.131682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.301 [2024-12-05 03:05:14.131715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:43.301 [2024-12-05 03:05:14.131725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:43.301 [2024-12-05 03:05:14.131732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.301 [2024-12-05 03:05:14.133800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.301 [2024-12-05 03:05:14.133829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:43.301 [2024-12-05 03:05:14.133837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.056 ms 00:20:43.301 [2024-12-05 03:05:14.133843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.301 [2024-12-05 03:05:14.133897] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:43.301 [2024-12-05 03:05:14.134490] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:43.301 [2024-12-05 03:05:14.134507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.301 [2024-12-05 03:05:14.134514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:43.301 [2024-12-05 03:05:14.134520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:20:43.301 [2024-12-05 03:05:14.134526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.301 [2024-12-05 03:05:14.135569] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:43.564 [2024-12-05 03:05:14.145052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.564 [2024-12-05 03:05:14.145091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:43.564 [2024-12-05 03:05:14.145100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.483 ms 00:20:43.564 [2024-12-05 03:05:14.145106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.564 [2024-12-05 03:05:14.145175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.564 [2024-12-05 03:05:14.145185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:43.564 [2024-12-05 03:05:14.145191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:43.564 [2024-12-05 03:05:14.145197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.564 [2024-12-05 03:05:14.149444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:43.564 [2024-12-05 03:05:14.149557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:43.564 [2024-12-05 03:05:14.149568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.218 ms 00:20:43.564 [2024-12-05 03:05:14.149575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.564 [2024-12-05 03:05:14.149644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.564 [2024-12-05 03:05:14.149652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:43.564 [2024-12-05 03:05:14.149659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:43.564 [2024-12-05 03:05:14.149665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.564 [2024-12-05 03:05:14.149682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.564 [2024-12-05 03:05:14.149688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:43.564 [2024-12-05 03:05:14.149694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:43.564 [2024-12-05 03:05:14.149700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.565 [2024-12-05 03:05:14.149716] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:43.565 [2024-12-05 03:05:14.152477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.565 [2024-12-05 03:05:14.152500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:43.565 [2024-12-05 03:05:14.152507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.764 ms 00:20:43.565 [2024-12-05 03:05:14.152513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.565 [2024-12-05 03:05:14.152542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.565 [2024-12-05 03:05:14.152549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:43.565 [2024-12-05 03:05:14.152555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:43.565 [2024-12-05 03:05:14.152561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.565 [2024-12-05 03:05:14.152576] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:43.565 [2024-12-05 03:05:14.152590] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:43.565 [2024-12-05 03:05:14.152615] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:43.565 [2024-12-05 03:05:14.152626] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:43.565 [2024-12-05 03:05:14.152704] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:43.565 [2024-12-05 03:05:14.152712] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:43.565 [2024-12-05 03:05:14.152720] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:43.565 [2024-12-05 03:05:14.152729] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:43.565 [2024-12-05 03:05:14.152735] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:43.565 [2024-12-05 03:05:14.152741] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:43.565 [2024-12-05 03:05:14.152747] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:43.565 [2024-12-05 03:05:14.152752] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:43.565 [2024-12-05 03:05:14.152757] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:43.565 [2024-12-05 03:05:14.152763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.565 [2024-12-05 03:05:14.152768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:43.565 [2024-12-05 03:05:14.152774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:20:43.565 [2024-12-05 03:05:14.152780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.565 [2024-12-05 03:05:14.152846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.565 [2024-12-05 03:05:14.152854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:43.565 [2024-12-05 03:05:14.152859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:43.565 [2024-12-05 03:05:14.152864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.565 [2024-12-05 03:05:14.152938] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:43.565 [2024-12-05 03:05:14.152946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:43.565 [2024-12-05 03:05:14.152952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:43.565 [2024-12-05 03:05:14.152957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:43.565 [2024-12-05 03:05:14.152963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:43.565 [2024-12-05 03:05:14.152968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:43.565 [2024-12-05 03:05:14.152973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:43.565 [2024-12-05 03:05:14.152979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:43.565 [2024-12-05 03:05:14.152984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:43.565 [2024-12-05 03:05:14.152989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:43.565 [2024-12-05 03:05:14.152994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:43.565 [2024-12-05 03:05:14.153005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:43.565 [2024-12-05 03:05:14.153010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:43.565 [2024-12-05 03:05:14.153015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:43.565 [2024-12-05 03:05:14.153020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:43.565 [2024-12-05 03:05:14.153025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:43.565 [2024-12-05 03:05:14.153030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:43.565 [2024-12-05 03:05:14.153035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:43.565 [2024-12-05 03:05:14.153041] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:43.565 [2024-12-05 03:05:14.153046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:43.565 [2024-12-05 03:05:14.153051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:43.565 [2024-12-05 03:05:14.153056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:43.565 [2024-12-05 03:05:14.153061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:43.565 [2024-12-05 03:05:14.153066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:43.565 [2024-12-05 03:05:14.153194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:43.565 [2024-12-05 03:05:14.153217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:43.565 [2024-12-05 03:05:14.153232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:43.565 [2024-12-05 03:05:14.153247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:43.565 [2024-12-05 03:05:14.153261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:43.565 [2024-12-05 03:05:14.153275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:43.565 [2024-12-05 03:05:14.153289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:43.565 [2024-12-05 03:05:14.153302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:43.565 [2024-12-05 03:05:14.153355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:43.565 [2024-12-05 03:05:14.153372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:43.565 [2024-12-05 03:05:14.153386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:43.565 [2024-12-05 03:05:14.153400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:43.565 [2024-12-05 03:05:14.153413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:43.565 [2024-12-05 03:05:14.153426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:43.565 [2024-12-05 03:05:14.153440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:43.565 [2024-12-05 03:05:14.153454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:43.565 [2024-12-05 03:05:14.153523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:43.565 [2024-12-05 03:05:14.153537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:43.565 [2024-12-05 03:05:14.153551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:43.565 [2024-12-05 03:05:14.153565] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:43.565 [2024-12-05 03:05:14.153580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:43.565 [2024-12-05 03:05:14.153598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:43.565 [2024-12-05 03:05:14.153612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:43.565 [2024-12-05 03:05:14.153707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:43.565 [2024-12-05 03:05:14.153725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:43.565 [2024-12-05 03:05:14.153739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:43.565 
[2024-12-05 03:05:14.153779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:43.565 [2024-12-05 03:05:14.153795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:43.565 [2024-12-05 03:05:14.153809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:43.565 [2024-12-05 03:05:14.153842] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:43.565 [2024-12-05 03:05:14.153867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:43.565 [2024-12-05 03:05:14.153891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:43.565 [2024-12-05 03:05:14.153912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:43.565 [2024-12-05 03:05:14.153963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:43.565 [2024-12-05 03:05:14.153970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:43.565 [2024-12-05 03:05:14.153976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:43.565 [2024-12-05 03:05:14.153982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:43.565 [2024-12-05 03:05:14.153987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:43.565 [2024-12-05 03:05:14.153992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:43.565 [2024-12-05 03:05:14.153998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:43.565 [2024-12-05 03:05:14.154003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:43.565 [2024-12-05 03:05:14.154009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:43.565 [2024-12-05 03:05:14.154014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:43.565 [2024-12-05 03:05:14.154020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:43.566 [2024-12-05 03:05:14.154025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:43.566 [2024-12-05 03:05:14.154030] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:43.566 [2024-12-05 03:05:14.154037] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:43.566 [2024-12-05 03:05:14.154043] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:43.566 [2024-12-05 03:05:14.154049] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:43.566 [2024-12-05 03:05:14.154054] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:43.566 [2024-12-05 03:05:14.154060] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:43.566 [2024-12-05 03:05:14.154067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.566 [2024-12-05 03:05:14.154085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:43.566 [2024-12-05 03:05:14.154092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.180 ms 00:20:43.566 [2024-12-05 03:05:14.154097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.566 [2024-12-05 03:05:14.174655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.566 [2024-12-05 03:05:14.174681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:43.566 [2024-12-05 03:05:14.174689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.502 ms 00:20:43.566 [2024-12-05 03:05:14.174695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.566 [2024-12-05 03:05:14.174788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.566 [2024-12-05 03:05:14.174795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:43.566 [2024-12-05 03:05:14.174802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:43.566 [2024-12-05 03:05:14.174808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.566 [2024-12-05 03:05:14.210753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.566 [2024-12-05 03:05:14.210783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:43.566 [2024-12-05 03:05:14.210794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.927 ms 00:20:43.566 [2024-12-05 03:05:14.210801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.566 [2024-12-05 03:05:14.210859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.566 [2024-12-05 03:05:14.210868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:43.566 [2024-12-05 03:05:14.210875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:43.566 [2024-12-05 03:05:14.210880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.566 [2024-12-05 03:05:14.211180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.566 [2024-12-05 03:05:14.211191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:43.566 [2024-12-05 03:05:14.211198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:20:43.566 [2024-12-05 03:05:14.211207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.566 [2024-12-05 03:05:14.211309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.566 [2024-12-05 03:05:14.211316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:43.566 [2024-12-05 03:05:14.211323] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:20:43.566 [2024-12-05 03:05:14.211328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.566 [2024-12-05 03:05:14.221984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.566 [2024-12-05 03:05:14.222010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:43.566 [2024-12-05 03:05:14.222018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.640 ms 00:20:43.566 [2024-12-05 03:05:14.222023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.566 [2024-12-05 03:05:14.231779] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:43.566 [2024-12-05 03:05:14.231884] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:43.566 [2024-12-05 03:05:14.231895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.566 [2024-12-05 03:05:14.231900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:43.566 [2024-12-05 03:05:14.231907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.787 ms 00:20:43.566 [2024-12-05 03:05:14.231912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.566 [2024-12-05 03:05:14.250187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.566 [2024-12-05 03:05:14.250284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:43.566 [2024-12-05 03:05:14.250296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.234 ms 00:20:43.566 [2024-12-05 03:05:14.250302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.566 [2024-12-05 03:05:14.259086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.566 [2024-12-05 03:05:14.259112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:43.566 [2024-12-05 03:05:14.259119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.733 ms 00:20:43.566 [2024-12-05 03:05:14.259125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.566 [2024-12-05 03:05:14.267896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.566 [2024-12-05 03:05:14.267920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:43.566 [2024-12-05 03:05:14.267928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.732 ms 00:20:43.566 [2024-12-05 03:05:14.267933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.566 [2024-12-05 03:05:14.268414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.566 [2024-12-05 03:05:14.268430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:43.566 [2024-12-05 03:05:14.268437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:20:43.566 [2024-12-05 03:05:14.268443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.566 [2024-12-05 03:05:14.312100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.566 [2024-12-05 03:05:14.312132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:43.566 [2024-12-05 03:05:14.312142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
43.639 ms 00:20:43.566 [2024-12-05 03:05:14.312148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.566 [2024-12-05 03:05:14.320017] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:43.566 [2024-12-05 03:05:14.331196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.566 [2024-12-05 03:05:14.331223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:43.566 [2024-12-05 03:05:14.331232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.988 ms 00:20:43.566 [2024-12-05 03:05:14.331241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.566 [2024-12-05 03:05:14.331305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.566 [2024-12-05 03:05:14.331313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:43.566 [2024-12-05 03:05:14.331320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:43.566 [2024-12-05 03:05:14.331325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.566 [2024-12-05 03:05:14.331360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.566 [2024-12-05 03:05:14.331367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:43.566 [2024-12-05 03:05:14.331373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:43.566 [2024-12-05 03:05:14.331381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.566 [2024-12-05 03:05:14.331404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.566 [2024-12-05 03:05:14.331411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:43.566 [2024-12-05 03:05:14.331417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:43.566 [2024-12-05 03:05:14.331423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.566 [2024-12-05 03:05:14.331446] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:43.566 [2024-12-05 03:05:14.331453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.566 [2024-12-05 03:05:14.331459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:43.566 [2024-12-05 03:05:14.331465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:43.566 [2024-12-05 03:05:14.331471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.566 [2024-12-05 03:05:14.349390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.566 [2024-12-05 03:05:14.349496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:43.566 [2024-12-05 03:05:14.349509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.904 ms 00:20:43.566 [2024-12-05 03:05:14.349515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.566 [2024-12-05 03:05:14.349583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.566 [2024-12-05 03:05:14.349591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:43.566 [2024-12-05 03:05:14.349598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:43.566 [2024-12-05 03:05:14.349603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.566 
[2024-12-05 03:05:14.350297] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:43.566 [2024-12-05 03:05:14.352525] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 218.384 ms, result 0 00:20:43.566 [2024-12-05 03:05:14.353291] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:43.566 [2024-12-05 03:05:14.367951] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:44.952  [2024-12-05T03:05:16.736Z] Copying: 15/256 [MB] (15 MBps) [2024-12-05T03:05:17.677Z] Copying: 37/256 [MB] (21 MBps) [2024-12-05T03:05:18.621Z] Copying: 51/256 [MB] (14 MBps) [2024-12-05T03:05:19.566Z] Copying: 66/256 [MB] (15 MBps) [2024-12-05T03:05:20.512Z] Copying: 83/256 [MB] (16 MBps) [2024-12-05T03:05:21.457Z] Copying: 103/256 [MB] (20 MBps) [2024-12-05T03:05:22.846Z] Copying: 120/256 [MB] (16 MBps) [2024-12-05T03:05:23.420Z] Copying: 139/256 [MB] (19 MBps) [2024-12-05T03:05:24.808Z] Copying: 159/256 [MB] (19 MBps) [2024-12-05T03:05:25.752Z] Copying: 177/256 [MB] (17 MBps) [2024-12-05T03:05:26.696Z] Copying: 188/256 [MB] (10 MBps) [2024-12-05T03:05:27.660Z] Copying: 200/256 [MB] (12 MBps) [2024-12-05T03:05:28.604Z] Copying: 220/256 [MB] (19 MBps) [2024-12-05T03:05:29.550Z] Copying: 239/256 [MB] (19 MBps) [2024-12-05T03:05:29.550Z] Copying: 256/256 [MB] (average 17 MBps)[2024-12-05 03:05:29.441828] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:58.706 [2024-12-05 03:05:29.453731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.706 [2024-12-05 03:05:29.453781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:58.706 [2024-12-05 03:05:29.453805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:58.706 [2024-12-05 03:05:29.453814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.706 [2024-12-05 03:05:29.453841] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:58.706 [2024-12-05 03:05:29.456868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.706 [2024-12-05 03:05:29.456908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:58.706 [2024-12-05 03:05:29.456920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.011 ms 00:20:58.706 [2024-12-05 03:05:29.456929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.706 [2024-12-05 03:05:29.457239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.706 [2024-12-05 03:05:29.457252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:58.706 [2024-12-05 03:05:29.457262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:20:58.706 [2024-12-05 03:05:29.457271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.706 [2024-12-05 03:05:29.461499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.706 [2024-12-05 03:05:29.461535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:58.706 [2024-12-05 03:05:29.461546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.206 ms 00:20:58.706 [2024-12-05 03:05:29.461554] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.706 [2024-12-05 03:05:29.468639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.706 [2024-12-05 03:05:29.468676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:58.706 [2024-12-05 03:05:29.468687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.058 ms 00:20:58.706 [2024-12-05 03:05:29.468695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.706 [2024-12-05 03:05:29.493660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.706 [2024-12-05 03:05:29.493708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:58.706 [2024-12-05 03:05:29.493721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.892 ms 00:20:58.706 [2024-12-05 03:05:29.493729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.706 [2024-12-05 03:05:29.510131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.707 [2024-12-05 03:05:29.510178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:58.707 [2024-12-05 03:05:29.510197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.348 ms 00:20:58.707 [2024-12-05 03:05:29.510205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.707 [2024-12-05 03:05:29.510369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.707 [2024-12-05 03:05:29.510382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:58.707 [2024-12-05 03:05:29.510400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:20:58.707 [2024-12-05 03:05:29.510408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.707 [2024-12-05 03:05:29.535749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.707 [2024-12-05 03:05:29.535791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:58.707 [2024-12-05 03:05:29.535803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.324 ms 00:20:58.707 [2024-12-05 03:05:29.535811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.969 [2024-12-05 03:05:29.560454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.969 [2024-12-05 03:05:29.560499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:58.969 [2024-12-05 03:05:29.560512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.593 ms 00:20:58.969 [2024-12-05 03:05:29.560519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.969 [2024-12-05 03:05:29.584506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.969 [2024-12-05 03:05:29.584548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:58.969 [2024-12-05 03:05:29.584560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.936 ms 00:20:58.969 [2024-12-05 03:05:29.584568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.969 [2024-12-05 03:05:29.609281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.969 [2024-12-05 03:05:29.609331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:58.969 [2024-12-05 03:05:29.609344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 24.631 ms 00:20:58.969 [2024-12-05 03:05:29.609352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.969 [2024-12-05 03:05:29.609400] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:58.969 [2024-12-05 03:05:29.609416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 
03:05:29.609599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:58.969 [2024-12-05 03:05:29.609788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 
00:20:58.970 [2024-12-05 03:05:29.609796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.609804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.609811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.609819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.609827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.609834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.609841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.609850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.609858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.609865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.609875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.609883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.609893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.609901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.609909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.609916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.609924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.609932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.609940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.609947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.609955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.609962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.609970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.609977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.609985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 
wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.609993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.610002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.610010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.610017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.610025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.610032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.610040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.610047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.610054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.610062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.610091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.610101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.610109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.610117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.610125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.610133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.610140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.610149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.610157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.610165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.610183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.610192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.610200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.610207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.610216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.610224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.610232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:58.970 [2024-12-05 03:05:29.610249] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:58.970 [2024-12-05 03:05:29.610258] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 39a924c5-8443-42fa-9f63-e8e457595e05 00:20:58.970 [2024-12-05 03:05:29.610267] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:58.970 [2024-12-05 03:05:29.610275] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:58.970 [2024-12-05 03:05:29.610283] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:58.970 [2024-12-05 03:05:29.610292] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:58.970 [2024-12-05 03:05:29.610299] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:58.970 [2024-12-05 03:05:29.610307] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:58.970 [2024-12-05 03:05:29.610318] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:58.970 [2024-12-05 03:05:29.610325] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:58.970 [2024-12-05 03:05:29.610331] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:58.970 [2024-12-05 03:05:29.610339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.970 [2024-12-05 03:05:29.610347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:58.970 [2024-12-05 03:05:29.610357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.940 ms 00:20:58.970 [2024-12-05 03:05:29.610365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.970 [2024-12-05 03:05:29.624162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.970 [2024-12-05 03:05:29.624463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:58.970 [2024-12-05 03:05:29.624485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.760 ms 00:20:58.970 [2024-12-05 03:05:29.624494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.971 [2024-12-05 03:05:29.624901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.971 [2024-12-05 03:05:29.624912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:58.971 [2024-12-05 03:05:29.624921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:20:58.971 [2024-12-05 03:05:29.624929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.971 [2024-12-05 03:05:29.663895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.971 [2024-12-05 03:05:29.664099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:58.971 [2024-12-05 03:05:29.664118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.971 [2024-12-05 03:05:29.664134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.971 [2024-12-05 03:05:29.664232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.971 [2024-12-05 03:05:29.664242] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:58.971 [2024-12-05 03:05:29.664251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.971 [2024-12-05 03:05:29.664271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.971 [2024-12-05 03:05:29.664324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.971 [2024-12-05 03:05:29.664334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:58.971 [2024-12-05 03:05:29.664343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.971 [2024-12-05 03:05:29.664350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.971 [2024-12-05 03:05:29.664373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.971 [2024-12-05 03:05:29.664381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:58.971 [2024-12-05 03:05:29.664390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.971 [2024-12-05 03:05:29.664397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.971 [2024-12-05 03:05:29.749690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.971 [2024-12-05 03:05:29.749750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:58.971 [2024-12-05 03:05:29.749764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.971 [2024-12-05 03:05:29.749773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.232 [2024-12-05 03:05:29.819127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.232 [2024-12-05 03:05:29.819466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:59.232 [2024-12-05 03:05:29.819491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.232 [2024-12-05 03:05:29.819501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.232 [2024-12-05 03:05:29.819591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.232 [2024-12-05 03:05:29.819602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:59.232 [2024-12-05 03:05:29.819612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.232 [2024-12-05 03:05:29.819620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.232 [2024-12-05 03:05:29.819652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.232 [2024-12-05 03:05:29.819665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:59.232 [2024-12-05 03:05:29.819674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.232 [2024-12-05 03:05:29.819682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.232 [2024-12-05 03:05:29.819794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.232 [2024-12-05 03:05:29.819805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:59.232 [2024-12-05 03:05:29.819814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.232 [2024-12-05 03:05:29.819824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.233 [2024-12-05 03:05:29.819864] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.233 [2024-12-05 03:05:29.819874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:59.233 [2024-12-05 03:05:29.819886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.233 [2024-12-05 03:05:29.819895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.233 [2024-12-05 03:05:29.819939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.233 [2024-12-05 03:05:29.819950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:59.233 [2024-12-05 03:05:29.819958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.233 [2024-12-05 03:05:29.819967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.233 [2024-12-05 03:05:29.820016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.233 [2024-12-05 03:05:29.820030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:59.233 [2024-12-05 03:05:29.820039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.233 [2024-12-05 03:05:29.820048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.233 [2024-12-05 03:05:29.820234] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 366.503 ms, result 0 00:20:59.806 00:20:59.807 00:20:59.807 03:05:30 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:00.378 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:21:00.378 03:05:31 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:21:00.378 03:05:31 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:21:00.378 03:05:31 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:00.379 03:05:31 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:00.379 03:05:31 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:21:00.379 03:05:31 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:00.641 Process with pid 76933 is not found 00:21:00.641 03:05:31 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76933 00:21:00.641 03:05:31 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76933 ']' 00:21:00.641 03:05:31 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76933 00:21:00.641 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76933) - No such process 00:21:00.641 03:05:31 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 76933 is not found' 00:21:00.641 ************************************ 00:21:00.641 END TEST ftl_trim 00:21:00.641 ************************************ 00:21:00.641 00:21:00.641 real 1m15.721s 00:21:00.641 user 1m32.093s 00:21:00.641 sys 0m14.805s 00:21:00.641 03:05:31 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:00.641 03:05:31 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:00.641 03:05:31 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:00.641 03:05:31 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:00.641 03:05:31 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:00.641 03:05:31 ftl -- 
common/autotest_common.sh@10 -- # set +x 00:21:00.641 ************************************ 00:21:00.641 START TEST ftl_restore 00:21:00.641 ************************************ 00:21:00.641 03:05:31 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:00.641 * Looking for test storage... 00:21:00.641 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:00.641 03:05:31 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:00.641 03:05:31 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:21:00.641 03:05:31 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:00.641 03:05:31 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:00.641 03:05:31 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:00.641 03:05:31 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:00.641 03:05:31 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:00.641 03:05:31 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:21:00.641 03:05:31 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:21:00.641 03:05:31 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:21:00.641 03:05:31 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:21:00.641 03:05:31 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:21:00.641 03:05:31 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:21:00.641 03:05:31 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:21:00.641 03:05:31 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:00.641 03:05:31 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:21:00.641 03:05:31 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:21:00.641 03:05:31 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:00.641 03:05:31 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:00.641 03:05:31 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:21:00.641 03:05:31 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:21:00.641 03:05:31 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:00.641 03:05:31 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:21:00.641 03:05:31 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:21:00.641 03:05:31 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:21:00.641 03:05:31 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:21:00.641 03:05:31 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:00.641 03:05:31 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:21:00.641 03:05:31 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:21:00.641 03:05:31 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:00.641 03:05:31 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:00.641 03:05:31 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:21:00.641 03:05:31 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:00.641 03:05:31 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:00.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.641 --rc genhtml_branch_coverage=1 00:21:00.641 --rc genhtml_function_coverage=1 00:21:00.641 --rc genhtml_legend=1 00:21:00.641 --rc geninfo_all_blocks=1 00:21:00.641 --rc geninfo_unexecuted_blocks=1 00:21:00.641 00:21:00.641 ' 00:21:00.641 03:05:31 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:00.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.641 --rc genhtml_branch_coverage=1 00:21:00.641 --rc genhtml_function_coverage=1 00:21:00.641 --rc genhtml_legend=1 00:21:00.641 --rc geninfo_all_blocks=1 00:21:00.641 --rc geninfo_unexecuted_blocks=1 00:21:00.641 00:21:00.641 ' 00:21:00.641 03:05:31 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:00.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.641 --rc genhtml_branch_coverage=1 00:21:00.641 --rc genhtml_function_coverage=1 00:21:00.641 --rc genhtml_legend=1 00:21:00.641 --rc geninfo_all_blocks=1 00:21:00.641 --rc geninfo_unexecuted_blocks=1 00:21:00.641 00:21:00.641 ' 00:21:00.641 03:05:31 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:00.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.641 --rc genhtml_branch_coverage=1 00:21:00.641 --rc genhtml_function_coverage=1 00:21:00.641 --rc genhtml_legend=1 00:21:00.641 --rc geninfo_all_blocks=1 00:21:00.641 --rc geninfo_unexecuted_blocks=1 00:21:00.641 00:21:00.641 ' 00:21:00.641 03:05:31 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:00.642 03:05:31 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:21:00.642 03:05:31 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:00.642 03:05:31 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:00.642 03:05:31 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
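
The `lt 1.15 2` check traced above is the suite's field-wise version comparison: both strings are split on `.`, `-`, and `:` (the `IFS=.-:` / `read -ra` steps) and compared component by component, so `1.15 < 2` is decided by the first field alone. A standalone sketch of that idea, not the suite's actual `cmp_versions` implementation:

#!/usr/bin/env bash
# Sketch of a field-wise "less than" version compare, mirroring the trace above.
version_lt() {
    local IFS=.-:                          # same separators as the traced helper
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    done
    return 1                               # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"       # prints: 1.15 < 2
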
00:21:00.642 03:05:31 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:00.642 03:05:31 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:00.642 03:05:31 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:00.642 03:05:31 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:00.642 03:05:31 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:00.642 03:05:31 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:00.642 03:05:31 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:00.642 03:05:31 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:00.642 03:05:31 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:00.642 03:05:31 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:00.642 03:05:31 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:00.642 03:05:31 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:00.642 03:05:31 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:00.642 03:05:31 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:00.642 03:05:31 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:00.642 03:05:31 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:00.642 03:05:31 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:00.642 03:05:31 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:00.642 03:05:31 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:00.642 03:05:31 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:00.642 03:05:31 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:00.642 03:05:31 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:00.642 03:05:31 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:00.642 03:05:31 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:00.642 03:05:31 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:00.642 03:05:31 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:21:00.903 03:05:31 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.GznryG9J7i 00:21:00.903 03:05:31 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:00.903 03:05:31 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:21:00.903 03:05:31 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:21:00.903 03:05:31 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:00.903 03:05:31 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:21:00.903 03:05:31 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:21:00.903 03:05:31 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:21:00.903 03:05:31 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:00.903 
03:05:31 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77230 00:21:00.903 03:05:31 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77230 00:21:00.903 03:05:31 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 77230 ']' 00:21:00.903 03:05:31 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:00.903 03:05:31 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.903 03:05:31 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:00.903 03:05:31 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.903 03:05:31 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:00.903 03:05:31 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:21:00.903 [2024-12-05 03:05:31.576656] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:21:00.903 [2024-12-05 03:05:31.576978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77230 ] 00:21:00.903 [2024-12-05 03:05:31.743464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.164 [2024-12-05 03:05:31.862205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.733 03:05:32 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:01.733 03:05:32 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:21:01.733 03:05:32 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:01.733 03:05:32 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:21:01.733 03:05:32 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:01.733 03:05:32 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:21:01.733 03:05:32 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:21:01.733 03:05:32 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:02.304 03:05:32 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:02.304 03:05:32 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:21:02.304 03:05:32 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:02.304 03:05:32 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:02.304 03:05:32 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:02.304 03:05:32 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:02.304 03:05:32 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:02.304 03:05:32 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:02.304 03:05:33 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:02.304 { 00:21:02.304 "name": "nvme0n1", 00:21:02.304 "aliases": [ 00:21:02.304 "63401521-83ca-4494-8fd2-00ffdb65fc46" 00:21:02.304 ], 00:21:02.304 "product_name": "NVMe disk", 00:21:02.304 "block_size": 4096, 00:21:02.304 "num_blocks": 1310720, 00:21:02.304 "uuid": 
"63401521-83ca-4494-8fd2-00ffdb65fc46", 00:21:02.304 "numa_id": -1, 00:21:02.304 "assigned_rate_limits": { 00:21:02.304 "rw_ios_per_sec": 0, 00:21:02.304 "rw_mbytes_per_sec": 0, 00:21:02.304 "r_mbytes_per_sec": 0, 00:21:02.304 "w_mbytes_per_sec": 0 00:21:02.304 }, 00:21:02.304 "claimed": true, 00:21:02.304 "claim_type": "read_many_write_one", 00:21:02.304 "zoned": false, 00:21:02.304 "supported_io_types": { 00:21:02.304 "read": true, 00:21:02.304 "write": true, 00:21:02.304 "unmap": true, 00:21:02.304 "flush": true, 00:21:02.304 "reset": true, 00:21:02.304 "nvme_admin": true, 00:21:02.304 "nvme_io": true, 00:21:02.305 "nvme_io_md": false, 00:21:02.305 "write_zeroes": true, 00:21:02.305 "zcopy": false, 00:21:02.305 "get_zone_info": false, 00:21:02.305 "zone_management": false, 00:21:02.305 "zone_append": false, 00:21:02.305 "compare": true, 00:21:02.305 "compare_and_write": false, 00:21:02.305 "abort": true, 00:21:02.305 "seek_hole": false, 00:21:02.305 "seek_data": false, 00:21:02.305 "copy": true, 00:21:02.305 "nvme_iov_md": false 00:21:02.305 }, 00:21:02.305 "driver_specific": { 00:21:02.305 "nvme": [ 00:21:02.305 { 00:21:02.305 "pci_address": "0000:00:11.0", 00:21:02.305 "trid": { 00:21:02.305 "trtype": "PCIe", 00:21:02.305 "traddr": "0000:00:11.0" 00:21:02.305 }, 00:21:02.305 "ctrlr_data": { 00:21:02.305 "cntlid": 0, 00:21:02.305 "vendor_id": "0x1b36", 00:21:02.305 "model_number": "QEMU NVMe Ctrl", 00:21:02.305 "serial_number": "12341", 00:21:02.305 "firmware_revision": "8.0.0", 00:21:02.305 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:02.305 "oacs": { 00:21:02.305 "security": 0, 00:21:02.305 "format": 1, 00:21:02.305 "firmware": 0, 00:21:02.305 "ns_manage": 1 00:21:02.305 }, 00:21:02.305 "multi_ctrlr": false, 00:21:02.305 "ana_reporting": false 00:21:02.305 }, 00:21:02.305 "vs": { 00:21:02.305 "nvme_version": "1.4" 00:21:02.305 }, 00:21:02.305 "ns_data": { 00:21:02.305 "id": 1, 00:21:02.305 "can_share": false 00:21:02.305 } 00:21:02.305 } 00:21:02.305 ], 00:21:02.305 "mp_policy": "active_passive" 00:21:02.305 } 00:21:02.305 } 00:21:02.305 ]' 00:21:02.305 03:05:33 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:02.305 03:05:33 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:02.305 03:05:33 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:02.305 03:05:33 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:02.305 03:05:33 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:02.305 03:05:33 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:21:02.305 03:05:33 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:21:02.305 03:05:33 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:02.305 03:05:33 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:21:02.566 03:05:33 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:02.566 03:05:33 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:02.566 03:05:33 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=d4e8b217-9877-40b2-856c-fca636b4ce6e 00:21:02.566 03:05:33 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:21:02.566 03:05:33 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d4e8b217-9877-40b2-856c-fca636b4ce6e 00:21:02.826 03:05:33 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:21:03.086 03:05:33 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=b5cdcd59-7bd0-4135-a751-6287ffad22b2 00:21:03.086 03:05:33 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b5cdcd59-7bd0-4135-a751-6287ffad22b2 00:21:03.347 03:05:34 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=7b31a9b0-acf4-4fa3-89fb-8e72a008fceb 00:21:03.347 03:05:34 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:21:03.347 03:05:34 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7b31a9b0-acf4-4fa3-89fb-8e72a008fceb 00:21:03.347 03:05:34 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:21:03.347 03:05:34 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:03.347 03:05:34 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=7b31a9b0-acf4-4fa3-89fb-8e72a008fceb 00:21:03.347 03:05:34 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:21:03.347 03:05:34 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 7b31a9b0-acf4-4fa3-89fb-8e72a008fceb 00:21:03.347 03:05:34 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=7b31a9b0-acf4-4fa3-89fb-8e72a008fceb 00:21:03.347 03:05:34 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:03.347 03:05:34 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:03.347 03:05:34 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:03.347 03:05:34 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7b31a9b0-acf4-4fa3-89fb-8e72a008fceb 00:21:03.606 03:05:34 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:03.606 { 00:21:03.606 "name": "7b31a9b0-acf4-4fa3-89fb-8e72a008fceb", 00:21:03.606 "aliases": [ 00:21:03.606 "lvs/nvme0n1p0" 00:21:03.606 ], 00:21:03.606 "product_name": "Logical Volume", 00:21:03.606 "block_size": 4096, 00:21:03.606 "num_blocks": 26476544, 00:21:03.606 "uuid": "7b31a9b0-acf4-4fa3-89fb-8e72a008fceb", 00:21:03.606 "assigned_rate_limits": { 00:21:03.606 "rw_ios_per_sec": 0, 00:21:03.606 "rw_mbytes_per_sec": 0, 00:21:03.606 "r_mbytes_per_sec": 0, 00:21:03.606 "w_mbytes_per_sec": 0 00:21:03.606 }, 00:21:03.606 "claimed": false, 00:21:03.606 "zoned": false, 00:21:03.606 "supported_io_types": { 00:21:03.606 "read": true, 00:21:03.606 "write": true, 00:21:03.606 "unmap": true, 00:21:03.606 "flush": false, 00:21:03.606 "reset": true, 00:21:03.606 "nvme_admin": false, 00:21:03.606 "nvme_io": false, 00:21:03.606 "nvme_io_md": false, 00:21:03.606 "write_zeroes": true, 00:21:03.606 "zcopy": false, 00:21:03.606 "get_zone_info": false, 00:21:03.606 "zone_management": false, 00:21:03.606 "zone_append": false, 00:21:03.606 "compare": false, 00:21:03.606 "compare_and_write": false, 00:21:03.606 "abort": false, 00:21:03.606 "seek_hole": true, 00:21:03.606 "seek_data": true, 00:21:03.606 "copy": false, 00:21:03.606 "nvme_iov_md": false 00:21:03.606 }, 00:21:03.606 "driver_specific": { 00:21:03.606 "lvol": { 00:21:03.606 "lvol_store_uuid": "b5cdcd59-7bd0-4135-a751-6287ffad22b2", 00:21:03.606 "base_bdev": "nvme0n1", 00:21:03.606 "thin_provision": true, 00:21:03.606 "num_allocated_clusters": 0, 00:21:03.606 "snapshot": false, 00:21:03.606 "clone": false, 00:21:03.606 "esnap_clone": false 00:21:03.606 } 00:21:03.606 } 00:21:03.606 } 00:21:03.606 ]' 00:21:03.606 03:05:34 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:03.606 03:05:34 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:03.606 03:05:34 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:03.606 03:05:34 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:03.606 03:05:34 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:03.606 03:05:34 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:03.606 03:05:34 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:21:03.606 03:05:34 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:21:03.606 03:05:34 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:03.866 03:05:34 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:03.866 03:05:34 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:03.866 03:05:34 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 7b31a9b0-acf4-4fa3-89fb-8e72a008fceb 00:21:03.866 03:05:34 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=7b31a9b0-acf4-4fa3-89fb-8e72a008fceb 00:21:03.866 03:05:34 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:03.866 03:05:34 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:03.866 03:05:34 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:03.866 03:05:34 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7b31a9b0-acf4-4fa3-89fb-8e72a008fceb 00:21:04.126 03:05:34 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:04.126 { 00:21:04.126 "name": "7b31a9b0-acf4-4fa3-89fb-8e72a008fceb", 00:21:04.126 "aliases": [ 00:21:04.126 "lvs/nvme0n1p0" 00:21:04.126 ], 00:21:04.126 "product_name": "Logical Volume", 00:21:04.126 "block_size": 4096, 00:21:04.126 "num_blocks": 26476544, 00:21:04.126 "uuid": "7b31a9b0-acf4-4fa3-89fb-8e72a008fceb", 00:21:04.126 "assigned_rate_limits": { 00:21:04.126 "rw_ios_per_sec": 0, 00:21:04.126 "rw_mbytes_per_sec": 0, 00:21:04.126 "r_mbytes_per_sec": 0, 00:21:04.126 "w_mbytes_per_sec": 0 00:21:04.126 }, 00:21:04.126 "claimed": false, 00:21:04.126 "zoned": false, 00:21:04.126 "supported_io_types": { 00:21:04.126 "read": true, 00:21:04.126 "write": true, 00:21:04.126 "unmap": true, 00:21:04.126 "flush": false, 00:21:04.126 "reset": true, 00:21:04.126 "nvme_admin": false, 00:21:04.126 "nvme_io": false, 00:21:04.126 "nvme_io_md": false, 00:21:04.126 "write_zeroes": true, 00:21:04.126 "zcopy": false, 00:21:04.126 "get_zone_info": false, 00:21:04.126 "zone_management": false, 00:21:04.126 "zone_append": false, 00:21:04.126 "compare": false, 00:21:04.126 "compare_and_write": false, 00:21:04.126 "abort": false, 00:21:04.126 "seek_hole": true, 00:21:04.126 "seek_data": true, 00:21:04.126 "copy": false, 00:21:04.126 "nvme_iov_md": false 00:21:04.126 }, 00:21:04.126 "driver_specific": { 00:21:04.126 "lvol": { 00:21:04.126 "lvol_store_uuid": "b5cdcd59-7bd0-4135-a751-6287ffad22b2", 00:21:04.126 "base_bdev": "nvme0n1", 00:21:04.126 "thin_provision": true, 00:21:04.126 "num_allocated_clusters": 0, 00:21:04.126 "snapshot": false, 00:21:04.126 "clone": false, 00:21:04.126 "esnap_clone": false 00:21:04.126 } 00:21:04.126 } 00:21:04.126 } 00:21:04.126 ]' 00:21:04.126 03:05:34 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
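
The get_bdev_size trace that continues below extracts block_size and num_blocks from the JSON dump above with jq and converts them to MiB: 4096 B per block × 26476544 blocks = 103424 MiB, matching the 103424 passed to bdev_lvol_create earlier. A standalone sketch of the same arithmetic; the rpc.py path and bdev name are taken from this log, and the jq filter is equivalent to the traced one:

#!/usr/bin/env bash
# Sketch: report a bdev's size in MiB the same way the traced get_bdev_size does.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py      # path as it appears in this log
bdev=${1:-7b31a9b0-acf4-4fa3-89fb-8e72a008fceb}      # the lvol bdev dumped above

info=$("$rpc" bdev_get_bdevs -b "$bdev")
bs=$(jq -r '.[0].block_size'  <<< "$info")
nb=$(jq -r '.[0].num_blocks'  <<< "$info")
echo "$(( bs * nb / 1024 / 1024 )) MiB"               # 4096 * 26476544 -> 103424 MiB
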
00:21:04.126 03:05:34 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:04.126 03:05:34 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:04.126 03:05:34 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:04.126 03:05:34 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:04.126 03:05:34 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:04.126 03:05:34 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:21:04.126 03:05:34 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:04.386 03:05:35 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:21:04.386 03:05:35 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 7b31a9b0-acf4-4fa3-89fb-8e72a008fceb 00:21:04.386 03:05:35 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=7b31a9b0-acf4-4fa3-89fb-8e72a008fceb 00:21:04.386 03:05:35 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:04.386 03:05:35 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:04.386 03:05:35 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:04.386 03:05:35 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7b31a9b0-acf4-4fa3-89fb-8e72a008fceb 00:21:04.647 03:05:35 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:04.647 { 00:21:04.647 "name": "7b31a9b0-acf4-4fa3-89fb-8e72a008fceb", 00:21:04.647 "aliases": [ 00:21:04.647 "lvs/nvme0n1p0" 00:21:04.647 ], 00:21:04.647 "product_name": "Logical Volume", 00:21:04.647 "block_size": 4096, 00:21:04.647 "num_blocks": 26476544, 00:21:04.647 "uuid": "7b31a9b0-acf4-4fa3-89fb-8e72a008fceb", 00:21:04.647 "assigned_rate_limits": { 00:21:04.647 "rw_ios_per_sec": 0, 00:21:04.647 "rw_mbytes_per_sec": 0, 00:21:04.647 "r_mbytes_per_sec": 0, 00:21:04.647 "w_mbytes_per_sec": 0 00:21:04.647 }, 00:21:04.647 "claimed": false, 00:21:04.647 "zoned": false, 00:21:04.647 "supported_io_types": { 00:21:04.647 "read": true, 00:21:04.647 "write": true, 00:21:04.647 "unmap": true, 00:21:04.647 "flush": false, 00:21:04.647 "reset": true, 00:21:04.647 "nvme_admin": false, 00:21:04.647 "nvme_io": false, 00:21:04.647 "nvme_io_md": false, 00:21:04.647 "write_zeroes": true, 00:21:04.647 "zcopy": false, 00:21:04.647 "get_zone_info": false, 00:21:04.647 "zone_management": false, 00:21:04.647 "zone_append": false, 00:21:04.647 "compare": false, 00:21:04.647 "compare_and_write": false, 00:21:04.647 "abort": false, 00:21:04.647 "seek_hole": true, 00:21:04.647 "seek_data": true, 00:21:04.647 "copy": false, 00:21:04.647 "nvme_iov_md": false 00:21:04.647 }, 00:21:04.647 "driver_specific": { 00:21:04.647 "lvol": { 00:21:04.647 "lvol_store_uuid": "b5cdcd59-7bd0-4135-a751-6287ffad22b2", 00:21:04.647 "base_bdev": "nvme0n1", 00:21:04.647 "thin_provision": true, 00:21:04.647 "num_allocated_clusters": 0, 00:21:04.647 "snapshot": false, 00:21:04.647 "clone": false, 00:21:04.647 "esnap_clone": false 00:21:04.647 } 00:21:04.647 } 00:21:04.647 } 00:21:04.647 ]' 00:21:04.647 03:05:35 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:04.647 03:05:35 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:04.647 03:05:35 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:04.647 03:05:35 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:21:04.647 03:05:35 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:04.647 03:05:35 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:04.647 03:05:35 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:21:04.647 03:05:35 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 7b31a9b0-acf4-4fa3-89fb-8e72a008fceb --l2p_dram_limit 10' 00:21:04.647 03:05:35 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:21:04.647 03:05:35 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:21:04.647 03:05:35 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:04.647 03:05:35 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:21:04.647 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:21:04.647 03:05:35 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7b31a9b0-acf4-4fa3-89fb-8e72a008fceb --l2p_dram_limit 10 -c nvc0n1p0 00:21:04.908 [2024-12-05 03:05:35.528294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.908 [2024-12-05 03:05:35.528417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:04.908 [2024-12-05 03:05:35.528436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:04.908 [2024-12-05 03:05:35.528444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.908 [2024-12-05 03:05:35.528500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.908 [2024-12-05 03:05:35.528508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:04.908 [2024-12-05 03:05:35.528515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:04.908 [2024-12-05 03:05:35.528521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.908 [2024-12-05 03:05:35.528541] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:04.908 [2024-12-05 03:05:35.529146] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:04.908 [2024-12-05 03:05:35.529163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.908 [2024-12-05 03:05:35.529169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:04.908 [2024-12-05 03:05:35.529178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.626 ms 00:21:04.908 [2024-12-05 03:05:35.529184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.908 [2024-12-05 03:05:35.529291] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b76b6fa1-c46b-47b1-9df2-5dec0f5783c7 00:21:04.908 [2024-12-05 03:05:35.530235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.908 [2024-12-05 03:05:35.530256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:04.908 [2024-12-05 03:05:35.530264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:04.908 [2024-12-05 03:05:35.530273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.908 [2024-12-05 03:05:35.534905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.908 [2024-12-05 
03:05:35.534933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:04.908 [2024-12-05 03:05:35.534941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.602 ms 00:21:04.908 [2024-12-05 03:05:35.534947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.908 [2024-12-05 03:05:35.535012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.908 [2024-12-05 03:05:35.535021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:04.908 [2024-12-05 03:05:35.535027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:21:04.908 [2024-12-05 03:05:35.535037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.908 [2024-12-05 03:05:35.535064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.908 [2024-12-05 03:05:35.535087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:04.908 [2024-12-05 03:05:35.535096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:04.908 [2024-12-05 03:05:35.535102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.908 [2024-12-05 03:05:35.535119] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:04.908 [2024-12-05 03:05:35.537943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.908 [2024-12-05 03:05:35.537966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:04.908 [2024-12-05 03:05:35.537975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.827 ms 00:21:04.908 [2024-12-05 03:05:35.537981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.908 [2024-12-05 03:05:35.538008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.908 [2024-12-05 03:05:35.538014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:04.908 [2024-12-05 03:05:35.538021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:04.908 [2024-12-05 03:05:35.538027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.908 [2024-12-05 03:05:35.538052] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:04.908 [2024-12-05 03:05:35.538170] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:04.908 [2024-12-05 03:05:35.538182] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:04.908 [2024-12-05 03:05:35.538191] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:04.908 [2024-12-05 03:05:35.538200] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:04.908 [2024-12-05 03:05:35.538206] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:04.908 [2024-12-05 03:05:35.538214] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:04.908 [2024-12-05 03:05:35.538219] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:04.908 [2024-12-05 03:05:35.538229] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:04.908 [2024-12-05 03:05:35.538234] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:04.908 [2024-12-05 03:05:35.538242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.908 [2024-12-05 03:05:35.538252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:04.908 [2024-12-05 03:05:35.538259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:21:04.908 [2024-12-05 03:05:35.538265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.908 [2024-12-05 03:05:35.538330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.908 [2024-12-05 03:05:35.538336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:04.908 [2024-12-05 03:05:35.538343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:04.908 [2024-12-05 03:05:35.538348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.908 [2024-12-05 03:05:35.538425] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:04.908 [2024-12-05 03:05:35.538432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:04.908 [2024-12-05 03:05:35.538440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:04.908 [2024-12-05 03:05:35.538445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.908 [2024-12-05 03:05:35.538452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:04.908 [2024-12-05 03:05:35.538457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:04.908 [2024-12-05 03:05:35.538464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:04.908 [2024-12-05 03:05:35.538469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:04.908 [2024-12-05 03:05:35.538475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:04.908 [2024-12-05 03:05:35.538481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:04.908 [2024-12-05 03:05:35.538488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:04.908 [2024-12-05 03:05:35.538494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:04.908 [2024-12-05 03:05:35.538500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:04.908 [2024-12-05 03:05:35.538505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:04.908 [2024-12-05 03:05:35.538512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:04.909 [2024-12-05 03:05:35.538517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.909 [2024-12-05 03:05:35.538526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:04.909 [2024-12-05 03:05:35.538531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:04.909 [2024-12-05 03:05:35.538537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.909 [2024-12-05 03:05:35.538543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:04.909 [2024-12-05 03:05:35.538550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:04.909 [2024-12-05 03:05:35.538555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.909 [2024-12-05 03:05:35.538561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:04.909 
[2024-12-05 03:05:35.538567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:04.909 [2024-12-05 03:05:35.538573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.909 [2024-12-05 03:05:35.538578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:04.909 [2024-12-05 03:05:35.538584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:04.909 [2024-12-05 03:05:35.538588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.909 [2024-12-05 03:05:35.538595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:04.909 [2024-12-05 03:05:35.538600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:04.909 [2024-12-05 03:05:35.538607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.909 [2024-12-05 03:05:35.538612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:04.909 [2024-12-05 03:05:35.538619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:04.909 [2024-12-05 03:05:35.538624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:04.909 [2024-12-05 03:05:35.538630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:04.909 [2024-12-05 03:05:35.538635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:04.909 [2024-12-05 03:05:35.538643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:04.909 [2024-12-05 03:05:35.538647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:04.909 [2024-12-05 03:05:35.538654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:04.909 [2024-12-05 03:05:35.538659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.909 [2024-12-05 03:05:35.538665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:04.909 [2024-12-05 03:05:35.538669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:04.909 [2024-12-05 03:05:35.538676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.909 [2024-12-05 03:05:35.538680] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:04.909 [2024-12-05 03:05:35.538688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:04.909 [2024-12-05 03:05:35.538693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:04.909 [2024-12-05 03:05:35.538700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.909 [2024-12-05 03:05:35.538706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:04.909 [2024-12-05 03:05:35.538714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:04.909 [2024-12-05 03:05:35.538719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:04.909 [2024-12-05 03:05:35.538726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:04.909 [2024-12-05 03:05:35.538731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:04.909 [2024-12-05 03:05:35.538738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:04.909 [2024-12-05 03:05:35.538744] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:04.909 [2024-12-05 
03:05:35.538753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:04.909 [2024-12-05 03:05:35.538760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:04.909 [2024-12-05 03:05:35.538767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:04.909 [2024-12-05 03:05:35.538772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:04.909 [2024-12-05 03:05:35.538779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:04.909 [2024-12-05 03:05:35.538784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:04.909 [2024-12-05 03:05:35.538790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:04.909 [2024-12-05 03:05:35.538796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:04.909 [2024-12-05 03:05:35.538803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:04.909 [2024-12-05 03:05:35.538809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:04.909 [2024-12-05 03:05:35.538816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:04.909 [2024-12-05 03:05:35.538822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:04.909 [2024-12-05 03:05:35.538828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:04.909 [2024-12-05 03:05:35.538833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:04.909 [2024-12-05 03:05:35.538840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:04.909 [2024-12-05 03:05:35.538845] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:04.909 [2024-12-05 03:05:35.538853] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:04.909 [2024-12-05 03:05:35.538859] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:04.909 [2024-12-05 03:05:35.538866] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:04.909 [2024-12-05 03:05:35.538871] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:04.909 [2024-12-05 03:05:35.538878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:04.909 [2024-12-05 03:05:35.538884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.909 [2024-12-05 03:05:35.538891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:04.909 [2024-12-05 03:05:35.538896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:21:04.909 [2024-12-05 03:05:35.538903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.909 [2024-12-05 03:05:35.538930] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:21:04.909 [2024-12-05 03:05:35.538940] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:08.212 [2024-12-05 03:05:39.049799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.212 [2024-12-05 03:05:39.050154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:08.212 [2024-12-05 03:05:39.050244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3510.852 ms 00:21:08.212 [2024-12-05 03:05:39.050280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.476 [2024-12-05 03:05:39.081612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.476 [2024-12-05 03:05:39.081831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:08.477 [2024-12-05 03:05:39.081949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.081 ms 00:21:08.477 [2024-12-05 03:05:39.081980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.477 [2024-12-05 03:05:39.082147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.477 [2024-12-05 03:05:39.082179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:08.477 [2024-12-05 03:05:39.082201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:21:08.477 [2024-12-05 03:05:39.082231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.477 [2024-12-05 03:05:39.117348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.477 [2024-12-05 03:05:39.117535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:08.477 [2024-12-05 03:05:39.117652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.930 ms 00:21:08.477 [2024-12-05 03:05:39.117669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.477 [2024-12-05 03:05:39.117702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.477 [2024-12-05 03:05:39.117721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:08.477 [2024-12-05 03:05:39.117731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:08.477 [2024-12-05 03:05:39.117749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.477 [2024-12-05 03:05:39.118306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.477 [2024-12-05 03:05:39.118348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:08.477 [2024-12-05 03:05:39.118360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.505 ms 00:21:08.477 [2024-12-05 03:05:39.118370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.477 
[2024-12-05 03:05:39.118484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.477 [2024-12-05 03:05:39.118496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:08.477 [2024-12-05 03:05:39.118507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:21:08.477 [2024-12-05 03:05:39.118520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.477 [2024-12-05 03:05:39.135671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.477 [2024-12-05 03:05:39.135724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:08.477 [2024-12-05 03:05:39.135735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.133 ms 00:21:08.477 [2024-12-05 03:05:39.135745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.477 [2024-12-05 03:05:39.165684] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:08.477 [2024-12-05 03:05:39.169474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.477 [2024-12-05 03:05:39.169522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:08.477 [2024-12-05 03:05:39.169537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.632 ms 00:21:08.477 [2024-12-05 03:05:39.169545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.477 [2024-12-05 03:05:39.273970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.477 [2024-12-05 03:05:39.274026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:08.477 [2024-12-05 03:05:39.274044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.376 ms 00:21:08.477 [2024-12-05 03:05:39.274053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.477 [2024-12-05 03:05:39.274274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.477 [2024-12-05 03:05:39.274290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:08.477 [2024-12-05 03:05:39.274305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:21:08.477 [2024-12-05 03:05:39.274313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.477 [2024-12-05 03:05:39.300362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.477 [2024-12-05 03:05:39.300559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:08.477 [2024-12-05 03:05:39.300588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.991 ms 00:21:08.477 [2024-12-05 03:05:39.300597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.747 [2024-12-05 03:05:39.325445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.747 [2024-12-05 03:05:39.325492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:08.747 [2024-12-05 03:05:39.325508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.702 ms 00:21:08.747 [2024-12-05 03:05:39.325516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.747 [2024-12-05 03:05:39.326127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.747 [2024-12-05 03:05:39.326145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:08.747 
[2024-12-05 03:05:39.326158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:21:08.747 [2024-12-05 03:05:39.326168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.747 [2024-12-05 03:05:39.415381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.747 [2024-12-05 03:05:39.415559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:08.747 [2024-12-05 03:05:39.415589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.166 ms 00:21:08.747 [2024-12-05 03:05:39.415598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.747 [2024-12-05 03:05:39.442706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.747 [2024-12-05 03:05:39.442871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:08.747 [2024-12-05 03:05:39.442896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.031 ms 00:21:08.747 [2024-12-05 03:05:39.442904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.747 [2024-12-05 03:05:39.468639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.747 [2024-12-05 03:05:39.468690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:08.747 [2024-12-05 03:05:39.468705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.690 ms 00:21:08.747 [2024-12-05 03:05:39.468712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.747 [2024-12-05 03:05:39.494506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.747 [2024-12-05 03:05:39.494553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:08.747 [2024-12-05 03:05:39.494568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.744 ms 00:21:08.747 [2024-12-05 03:05:39.494576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.747 [2024-12-05 03:05:39.494630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.747 [2024-12-05 03:05:39.494640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:08.747 [2024-12-05 03:05:39.494655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:08.747 [2024-12-05 03:05:39.494662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.747 [2024-12-05 03:05:39.494750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.747 [2024-12-05 03:05:39.494764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:08.747 [2024-12-05 03:05:39.494775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:08.747 [2024-12-05 03:05:39.494784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.747 [2024-12-05 03:05:39.496042] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3967.260 ms, result 0 00:21:08.747 { 00:21:08.747 "name": "ftl0", 00:21:08.747 "uuid": "b76b6fa1-c46b-47b1-9df2-5dec0f5783c7" 00:21:08.747 } 00:21:08.747 03:05:39 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:21:08.748 03:05:39 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:09.009 03:05:39 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:21:09.009 03:05:39 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:09.271 [2024-12-05 03:05:39.923306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.271 [2024-12-05 03:05:39.923524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:09.271 [2024-12-05 03:05:39.923549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:09.271 [2024-12-05 03:05:39.923561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.271 [2024-12-05 03:05:39.923595] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:09.271 [2024-12-05 03:05:39.926633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.271 [2024-12-05 03:05:39.926783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:09.271 [2024-12-05 03:05:39.926808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.014 ms 00:21:09.271 [2024-12-05 03:05:39.926817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.271 [2024-12-05 03:05:39.927153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.271 [2024-12-05 03:05:39.927169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:09.271 [2024-12-05 03:05:39.927181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:21:09.271 [2024-12-05 03:05:39.927189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.271 [2024-12-05 03:05:39.930433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.271 [2024-12-05 03:05:39.930458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:09.271 [2024-12-05 03:05:39.930471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.225 ms 00:21:09.271 [2024-12-05 03:05:39.930479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.271 [2024-12-05 03:05:39.936815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.271 [2024-12-05 03:05:39.936853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:09.271 [2024-12-05 03:05:39.936870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.313 ms 00:21:09.271 [2024-12-05 03:05:39.936878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.271 [2024-12-05 03:05:39.963247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.271 [2024-12-05 03:05:39.963294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:09.271 [2024-12-05 03:05:39.963309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.295 ms 00:21:09.271 [2024-12-05 03:05:39.963317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.271 [2024-12-05 03:05:39.981142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.271 [2024-12-05 03:05:39.981189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:09.271 [2024-12-05 03:05:39.981205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.767 ms 00:21:09.271 [2024-12-05 03:05:39.981214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.271 [2024-12-05 03:05:39.981385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.271 [2024-12-05 03:05:39.981397] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:09.271 [2024-12-05 03:05:39.981409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:21:09.271 [2024-12-05 03:05:39.981417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.271 [2024-12-05 03:05:40.007619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.271 [2024-12-05 03:05:40.007671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:09.271 [2024-12-05 03:05:40.007686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.173 ms 00:21:09.271 [2024-12-05 03:05:40.007694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.271 [2024-12-05 03:05:40.039946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.271 [2024-12-05 03:05:40.040298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:09.271 [2024-12-05 03:05:40.040341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.183 ms 00:21:09.271 [2024-12-05 03:05:40.040354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.271 [2024-12-05 03:05:40.065512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.271 [2024-12-05 03:05:40.065565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:09.271 [2024-12-05 03:05:40.065582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.084 ms 00:21:09.271 [2024-12-05 03:05:40.065590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.271 [2024-12-05 03:05:40.090791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.271 [2024-12-05 03:05:40.090839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:09.271 [2024-12-05 03:05:40.090854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.098 ms 00:21:09.271 [2024-12-05 03:05:40.090862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.271 [2024-12-05 03:05:40.090912] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:09.272 [2024-12-05 03:05:40.090929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.090946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.090954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.090965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.090973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.090984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.090992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091023] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 
[2024-12-05 03:05:40.091301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:21:09.272 [2024-12-05 03:05:40.091536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:09.272 [2024-12-05 03:05:40.091663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:09.273 [2024-12-05 03:05:40.091670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:09.273 [2024-12-05 03:05:40.091680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:09.273 [2024-12-05 03:05:40.091687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:09.273 [2024-12-05 03:05:40.091697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:09.273 [2024-12-05 03:05:40.091704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:09.273 [2024-12-05 03:05:40.091713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:09.273 [2024-12-05 03:05:40.091720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:09.273 [2024-12-05 03:05:40.091730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:09.273 [2024-12-05 03:05:40.091737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:09.273 [2024-12-05 03:05:40.091746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:21:09.273 [2024-12-05 03:05:40.091755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:09.273 [2024-12-05 03:05:40.091765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:09.273 [2024-12-05 03:05:40.091772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:09.273 [2024-12-05 03:05:40.091783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:09.273 [2024-12-05 03:05:40.091791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:09.273 [2024-12-05 03:05:40.091800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:09.273 [2024-12-05 03:05:40.091807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:09.273 [2024-12-05 03:05:40.091816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:09.273 [2024-12-05 03:05:40.091824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:09.273 [2024-12-05 03:05:40.091834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:09.273 [2024-12-05 03:05:40.091842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:09.273 [2024-12-05 03:05:40.091851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:09.273 [2024-12-05 03:05:40.091859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:09.273 [2024-12-05 03:05:40.091870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:09.273 [2024-12-05 03:05:40.091877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:09.273 [2024-12-05 03:05:40.091887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:09.273 [2024-12-05 03:05:40.091903] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:09.273 [2024-12-05 03:05:40.091913] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b76b6fa1-c46b-47b1-9df2-5dec0f5783c7 00:21:09.273 [2024-12-05 03:05:40.091921] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:09.273 [2024-12-05 03:05:40.091933] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:09.273 [2024-12-05 03:05:40.091943] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:09.273 [2024-12-05 03:05:40.091953] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:09.273 [2024-12-05 03:05:40.091961] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:09.273 [2024-12-05 03:05:40.091971] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:09.273 [2024-12-05 03:05:40.091978] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:09.273 [2024-12-05 03:05:40.091987] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:09.273 [2024-12-05 03:05:40.091993] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:21:09.273 [2024-12-05 03:05:40.092002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.273 [2024-12-05 03:05:40.092010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:09.273 [2024-12-05 03:05:40.092021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.091 ms 00:21:09.273 [2024-12-05 03:05:40.092031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.273 [2024-12-05 03:05:40.105798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.273 [2024-12-05 03:05:40.105842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:09.273 [2024-12-05 03:05:40.105856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.694 ms 00:21:09.273 [2024-12-05 03:05:40.105865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.273 [2024-12-05 03:05:40.106314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.273 [2024-12-05 03:05:40.106331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:09.273 [2024-12-05 03:05:40.106347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:21:09.273 [2024-12-05 03:05:40.106356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.534 [2024-12-05 03:05:40.152643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.534 [2024-12-05 03:05:40.152691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:09.534 [2024-12-05 03:05:40.152706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.534 [2024-12-05 03:05:40.152715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.534 [2024-12-05 03:05:40.152789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.534 [2024-12-05 03:05:40.152797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:09.534 [2024-12-05 03:05:40.152810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.534 [2024-12-05 03:05:40.152819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.534 [2024-12-05 03:05:40.152903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.534 [2024-12-05 03:05:40.152914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:09.534 [2024-12-05 03:05:40.152925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.534 [2024-12-05 03:05:40.152933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.534 [2024-12-05 03:05:40.152957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.534 [2024-12-05 03:05:40.152965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:09.534 [2024-12-05 03:05:40.152976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.534 [2024-12-05 03:05:40.152986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.534 [2024-12-05 03:05:40.236740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.534 [2024-12-05 03:05:40.236980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:09.534 [2024-12-05 03:05:40.237007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:21:09.534 [2024-12-05 03:05:40.237016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.534 [2024-12-05 03:05:40.305182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.534 [2024-12-05 03:05:40.305234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:09.534 [2024-12-05 03:05:40.305250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.534 [2024-12-05 03:05:40.305262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.534 [2024-12-05 03:05:40.305370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.534 [2024-12-05 03:05:40.305381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:09.534 [2024-12-05 03:05:40.305391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.534 [2024-12-05 03:05:40.305400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.534 [2024-12-05 03:05:40.305453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.534 [2024-12-05 03:05:40.305463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:09.534 [2024-12-05 03:05:40.305473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.534 [2024-12-05 03:05:40.305482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.534 [2024-12-05 03:05:40.305590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.534 [2024-12-05 03:05:40.305601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:09.534 [2024-12-05 03:05:40.305612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.534 [2024-12-05 03:05:40.305621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.534 [2024-12-05 03:05:40.305658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.534 [2024-12-05 03:05:40.305668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:09.534 [2024-12-05 03:05:40.305678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.534 [2024-12-05 03:05:40.305687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.534 [2024-12-05 03:05:40.305734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.534 [2024-12-05 03:05:40.305743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:09.534 [2024-12-05 03:05:40.305754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.534 [2024-12-05 03:05:40.305763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.534 [2024-12-05 03:05:40.305816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.534 [2024-12-05 03:05:40.305827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:09.534 [2024-12-05 03:05:40.305837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.534 [2024-12-05 03:05:40.305846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.534 [2024-12-05 03:05:40.305997] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 382.652 ms, result 0 00:21:09.534 true 00:21:09.534 03:05:40 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77230 
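The "FTL shutdown" trace above is the effect of `rpc.py bdev_ftl_unload -b ftl0`: L2P, NV cache, band/trim state and the superblock are persisted, the clean state is set, and the startup steps are rolled back before the test stops the SPDK app. A minimal sketch of that teardown, with names taken from this run (killprocess is the autotest helper, not an SPDK RPC; the output path for the saved config is an assumption based on the ftl.json that spdk_dd loads later):

    # Save the bdev subsystem config for the later restore step, then unload and stop the target.
    # restore.sh wraps the RPC output in {"subsystems": [ ... ]} before writing it out.
    ./scripts/rpc.py save_subsystem_config -n bdev    # assembled into test/ftl/config/ftl.json (assumed path)
    ./scripts/rpc.py bdev_ftl_unload -b ftl0          # persists FTL metadata and marks the device clean
    kill "$spdk_pid" && wait "$spdk_pid"              # pid 77230 in this run; killprocess adds sanity checks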
00:21:09.534 03:05:40 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77230 ']' 00:21:09.534 03:05:40 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77230 00:21:09.534 03:05:40 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:21:09.534 03:05:40 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:09.534 03:05:40 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77230 00:21:09.534 killing process with pid 77230 00:21:09.534 03:05:40 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:09.534 03:05:40 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:09.534 03:05:40 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77230' 00:21:09.534 03:05:40 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 77230 00:21:09.534 03:05:40 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 77230 00:21:16.190 03:05:46 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:21:20.394 262144+0 records in 00:21:20.394 262144+0 records out 00:21:20.394 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.35356 s, 247 MB/s 00:21:20.394 03:05:50 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:21.778 03:05:52 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:21.778 [2024-12-05 03:05:52.500238] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:21:21.778 [2024-12-05 03:05:52.500710] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77460 ] 00:21:22.039 [2024-12-05 03:05:52.656674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.039 [2024-12-05 03:05:52.755510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.298 [2024-12-05 03:05:53.043654] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:22.298 [2024-12-05 03:05:53.043743] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:22.560 [2024-12-05 03:05:53.205230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.560 [2024-12-05 03:05:53.205474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:22.560 [2024-12-05 03:05:53.205499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:22.560 [2024-12-05 03:05:53.205509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.560 [2024-12-05 03:05:53.205580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.560 [2024-12-05 03:05:53.205594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:22.560 [2024-12-05 03:05:53.205603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:21:22.560 [2024-12-05 03:05:53.205611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.560 [2024-12-05 03:05:53.205632] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:21:22.560 [2024-12-05 03:05:53.206357] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:22.560 [2024-12-05 03:05:53.206377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.560 [2024-12-05 03:05:53.206386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:22.560 [2024-12-05 03:05:53.206395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.750 ms 00:21:22.560 [2024-12-05 03:05:53.206404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.560 [2024-12-05 03:05:53.208041] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:22.560 [2024-12-05 03:05:53.222384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.560 [2024-12-05 03:05:53.222433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:22.560 [2024-12-05 03:05:53.222446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.345 ms 00:21:22.560 [2024-12-05 03:05:53.222455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.560 [2024-12-05 03:05:53.222537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.560 [2024-12-05 03:05:53.222548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:22.560 [2024-12-05 03:05:53.222557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:22.560 [2024-12-05 03:05:53.222565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.560 [2024-12-05 03:05:53.230551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.560 [2024-12-05 03:05:53.230597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:22.560 [2024-12-05 03:05:53.230608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.910 ms 00:21:22.560 [2024-12-05 03:05:53.230623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.560 [2024-12-05 03:05:53.230701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.560 [2024-12-05 03:05:53.230711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:22.560 [2024-12-05 03:05:53.230720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:21:22.560 [2024-12-05 03:05:53.230727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.560 [2024-12-05 03:05:53.230772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.560 [2024-12-05 03:05:53.230782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:22.561 [2024-12-05 03:05:53.230790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:22.561 [2024-12-05 03:05:53.230798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.561 [2024-12-05 03:05:53.230825] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:22.561 [2024-12-05 03:05:53.234839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.561 [2024-12-05 03:05:53.234879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:22.561 [2024-12-05 03:05:53.234894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.020 ms 00:21:22.561 [2024-12-05 03:05:53.234902] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.561 [2024-12-05 03:05:53.234938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.561 [2024-12-05 03:05:53.234946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:22.561 [2024-12-05 03:05:53.234955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:22.561 [2024-12-05 03:05:53.234963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.561 [2024-12-05 03:05:53.235013] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:22.561 [2024-12-05 03:05:53.235038] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:22.561 [2024-12-05 03:05:53.235099] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:22.561 [2024-12-05 03:05:53.235119] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:22.561 [2024-12-05 03:05:53.235227] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:22.561 [2024-12-05 03:05:53.235239] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:22.561 [2024-12-05 03:05:53.235250] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:22.561 [2024-12-05 03:05:53.235260] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:22.561 [2024-12-05 03:05:53.235270] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:22.561 [2024-12-05 03:05:53.235279] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:22.561 [2024-12-05 03:05:53.235287] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:22.561 [2024-12-05 03:05:53.235299] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:22.561 [2024-12-05 03:05:53.235307] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:22.561 [2024-12-05 03:05:53.235314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.561 [2024-12-05 03:05:53.235322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:22.561 [2024-12-05 03:05:53.235331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:21:22.561 [2024-12-05 03:05:53.235338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.561 [2024-12-05 03:05:53.235421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.561 [2024-12-05 03:05:53.235430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:22.561 [2024-12-05 03:05:53.235438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:22.561 [2024-12-05 03:05:53.235445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.561 [2024-12-05 03:05:53.235552] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:22.561 [2024-12-05 03:05:53.235563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:22.561 [2024-12-05 03:05:53.235571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:21:22.561 [2024-12-05 03:05:53.235579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:22.561 [2024-12-05 03:05:53.235587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:22.561 [2024-12-05 03:05:53.235595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:22.561 [2024-12-05 03:05:53.235602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:22.561 [2024-12-05 03:05:53.235611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:22.561 [2024-12-05 03:05:53.235618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:22.561 [2024-12-05 03:05:53.235625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:22.561 [2024-12-05 03:05:53.235632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:22.561 [2024-12-05 03:05:53.235639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:22.561 [2024-12-05 03:05:53.235646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:22.561 [2024-12-05 03:05:53.235660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:22.561 [2024-12-05 03:05:53.235668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:22.561 [2024-12-05 03:05:53.235675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:22.561 [2024-12-05 03:05:53.235682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:22.561 [2024-12-05 03:05:53.235689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:22.561 [2024-12-05 03:05:53.235696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:22.561 [2024-12-05 03:05:53.235704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:22.561 [2024-12-05 03:05:53.235711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:22.561 [2024-12-05 03:05:53.235717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:22.561 [2024-12-05 03:05:53.235723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:22.561 [2024-12-05 03:05:53.235730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:22.561 [2024-12-05 03:05:53.235736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:22.561 [2024-12-05 03:05:53.235743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:22.561 [2024-12-05 03:05:53.235749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:22.561 [2024-12-05 03:05:53.235756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:22.561 [2024-12-05 03:05:53.235763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:22.561 [2024-12-05 03:05:53.235770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:22.561 [2024-12-05 03:05:53.235777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:22.561 [2024-12-05 03:05:53.235783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:22.561 [2024-12-05 03:05:53.235790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:22.561 [2024-12-05 03:05:53.235796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:22.561 [2024-12-05 03:05:53.235803] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:21:22.561 [2024-12-05 03:05:53.235810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:22.561 [2024-12-05 03:05:53.235816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:22.561 [2024-12-05 03:05:53.235823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:22.561 [2024-12-05 03:05:53.235829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:22.561 [2024-12-05 03:05:53.235836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:22.561 [2024-12-05 03:05:53.235842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:22.561 [2024-12-05 03:05:53.235849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:22.561 [2024-12-05 03:05:53.235856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:22.561 [2024-12-05 03:05:53.235865] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:22.561 [2024-12-05 03:05:53.235875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:22.561 [2024-12-05 03:05:53.235883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:22.561 [2024-12-05 03:05:53.235890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:22.561 [2024-12-05 03:05:53.235898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:22.561 [2024-12-05 03:05:53.235905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:22.561 [2024-12-05 03:05:53.235912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:22.561 [2024-12-05 03:05:53.235919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:22.561 [2024-12-05 03:05:53.235926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:22.561 [2024-12-05 03:05:53.235933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:22.561 [2024-12-05 03:05:53.235941] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:22.561 [2024-12-05 03:05:53.235950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:22.561 [2024-12-05 03:05:53.235962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:22.561 [2024-12-05 03:05:53.235969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:22.561 [2024-12-05 03:05:53.235976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:22.561 [2024-12-05 03:05:53.235984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:22.561 [2024-12-05 03:05:53.235991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:22.561 [2024-12-05 03:05:53.235998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:22.561 [2024-12-05 03:05:53.236005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:22.562 [2024-12-05 03:05:53.236012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:22.562 [2024-12-05 03:05:53.236020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:22.562 [2024-12-05 03:05:53.236027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:22.562 [2024-12-05 03:05:53.236035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:22.562 [2024-12-05 03:05:53.236041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:22.562 [2024-12-05 03:05:53.236048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:22.562 [2024-12-05 03:05:53.236056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:22.562 [2024-12-05 03:05:53.236063] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:22.562 [2024-12-05 03:05:53.236085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:22.562 [2024-12-05 03:05:53.236094] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:22.562 [2024-12-05 03:05:53.236102] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:22.562 [2024-12-05 03:05:53.236109] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:22.562 [2024-12-05 03:05:53.236117] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:22.562 [2024-12-05 03:05:53.236125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.562 [2024-12-05 03:05:53.236133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:22.562 [2024-12-05 03:05:53.236141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.642 ms 00:21:22.562 [2024-12-05 03:05:53.236149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.562 [2024-12-05 03:05:53.268805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.562 [2024-12-05 03:05:53.268998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:22.562 [2024-12-05 03:05:53.269085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.607 ms 00:21:22.562 [2024-12-05 03:05:53.269120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.562 [2024-12-05 03:05:53.269224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.562 [2024-12-05 03:05:53.269300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:22.562 [2024-12-05 03:05:53.269324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.065 ms 00:21:22.562 [2024-12-05 03:05:53.269344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.562 [2024-12-05 03:05:53.316405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.562 [2024-12-05 03:05:53.316609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:22.562 [2024-12-05 03:05:53.317026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.948 ms 00:21:22.562 [2024-12-05 03:05:53.317113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.562 [2024-12-05 03:05:53.317234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.562 [2024-12-05 03:05:53.317267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:22.562 [2024-12-05 03:05:53.317302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:22.562 [2024-12-05 03:05:53.317322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.562 [2024-12-05 03:05:53.317921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.562 [2024-12-05 03:05:53.317992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:22.562 [2024-12-05 03:05:53.318016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:21:22.562 [2024-12-05 03:05:53.318034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.562 [2024-12-05 03:05:53.318225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.562 [2024-12-05 03:05:53.318253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:22.562 [2024-12-05 03:05:53.318280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:21:22.562 [2024-12-05 03:05:53.318299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.562 [2024-12-05 03:05:53.333897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.562 [2024-12-05 03:05:53.334085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:22.562 [2024-12-05 03:05:53.334409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.495 ms 00:21:22.562 [2024-12-05 03:05:53.334449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.562 [2024-12-05 03:05:53.348684] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:22.562 [2024-12-05 03:05:53.348867] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:22.562 [2024-12-05 03:05:53.348934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.562 [2024-12-05 03:05:53.348955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:22.562 [2024-12-05 03:05:53.348975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.324 ms 00:21:22.562 [2024-12-05 03:05:53.348993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.562 [2024-12-05 03:05:53.374967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.562 [2024-12-05 03:05:53.375166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:22.562 [2024-12-05 03:05:53.375185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.923 ms 00:21:22.562 [2024-12-05 03:05:53.375194] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.562 [2024-12-05 03:05:53.388153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.562 [2024-12-05 03:05:53.388199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:22.562 [2024-12-05 03:05:53.388210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.913 ms 00:21:22.562 [2024-12-05 03:05:53.388218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.562 [2024-12-05 03:05:53.401161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.562 [2024-12-05 03:05:53.401205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:22.562 [2024-12-05 03:05:53.401217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.896 ms 00:21:22.562 [2024-12-05 03:05:53.401225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.562 [2024-12-05 03:05:53.401862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.562 [2024-12-05 03:05:53.401878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:22.562 [2024-12-05 03:05:53.401888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:21:22.562 [2024-12-05 03:05:53.401898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.823 [2024-12-05 03:05:53.468129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.823 [2024-12-05 03:05:53.468363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:22.823 [2024-12-05 03:05:53.468385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.211 ms 00:21:22.823 [2024-12-05 03:05:53.468401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.823 [2024-12-05 03:05:53.479560] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:22.823 [2024-12-05 03:05:53.482651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.823 [2024-12-05 03:05:53.482695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:22.823 [2024-12-05 03:05:53.482707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.117 ms 00:21:22.823 [2024-12-05 03:05:53.482716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.823 [2024-12-05 03:05:53.482798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.823 [2024-12-05 03:05:53.482809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:22.823 [2024-12-05 03:05:53.482819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:22.823 [2024-12-05 03:05:53.482827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.823 [2024-12-05 03:05:53.482900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.823 [2024-12-05 03:05:53.482911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:22.823 [2024-12-05 03:05:53.482920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:21:22.823 [2024-12-05 03:05:53.482928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.823 [2024-12-05 03:05:53.482949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.823 [2024-12-05 03:05:53.482958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:21:22.823 [2024-12-05 03:05:53.482967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:22.823 [2024-12-05 03:05:53.482975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.823 [2024-12-05 03:05:53.483008] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:22.823 [2024-12-05 03:05:53.483022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.823 [2024-12-05 03:05:53.483029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:22.823 [2024-12-05 03:05:53.483038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:22.823 [2024-12-05 03:05:53.483046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.823 [2024-12-05 03:05:53.508691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.823 [2024-12-05 03:05:53.508870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:22.823 [2024-12-05 03:05:53.508930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.625 ms 00:21:22.823 [2024-12-05 03:05:53.508961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.823 [2024-12-05 03:05:53.509154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.823 [2024-12-05 03:05:53.509184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:22.823 [2024-12-05 03:05:53.509195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:22.823 [2024-12-05 03:05:53.509204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.823 [2024-12-05 03:05:53.510563] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 304.822 ms, result 0 00:21:23.762  [2024-12-05T03:05:55.549Z] Copying: 10/1024 [MB] (10 MBps) [2024-12-05T03:05:56.925Z] Copying: 21/1024 [MB] (10 MBps) [2024-12-05T03:05:57.889Z] Copying: 42/1024 [MB] (21 MBps) [2024-12-05T03:05:58.833Z] Copying: 55/1024 [MB] (13 MBps) [2024-12-05T03:05:59.770Z] Copying: 67424/1048576 [kB] (10140 kBps) [2024-12-05T03:06:00.703Z] Copying: 79/1024 [MB] (13 MBps) [2024-12-05T03:06:01.636Z] Copying: 97/1024 [MB] (18 MBps) [2024-12-05T03:06:02.578Z] Copying: 116/1024 [MB] (18 MBps) [2024-12-05T03:06:03.962Z] Copying: 128/1024 [MB] (11 MBps) [2024-12-05T03:06:04.535Z] Copying: 146/1024 [MB] (18 MBps) [2024-12-05T03:06:05.919Z] Copying: 157/1024 [MB] (10 MBps) [2024-12-05T03:06:06.857Z] Copying: 169/1024 [MB] (12 MBps) [2024-12-05T03:06:07.790Z] Copying: 180/1024 [MB] (10 MBps) [2024-12-05T03:06:08.723Z] Copying: 197/1024 [MB] (17 MBps) [2024-12-05T03:06:09.658Z] Copying: 214/1024 [MB] (17 MBps) [2024-12-05T03:06:10.598Z] Copying: 231/1024 [MB] (16 MBps) [2024-12-05T03:06:11.531Z] Copying: 243/1024 [MB] (11 MBps) [2024-12-05T03:06:12.913Z] Copying: 261/1024 [MB] (18 MBps) [2024-12-05T03:06:13.845Z] Copying: 275/1024 [MB] (13 MBps) [2024-12-05T03:06:14.540Z] Copying: 293/1024 [MB] (18 MBps) [2024-12-05T03:06:15.912Z] Copying: 311/1024 [MB] (17 MBps) [2024-12-05T03:06:16.847Z] Copying: 328/1024 [MB] (17 MBps) [2024-12-05T03:06:17.786Z] Copying: 346/1024 [MB] (17 MBps) [2024-12-05T03:06:18.722Z] Copying: 357/1024 [MB] (11 MBps) [2024-12-05T03:06:19.665Z] Copying: 375/1024 [MB] (17 MBps) [2024-12-05T03:06:20.610Z] Copying: 387/1024 [MB] (11 MBps) [2024-12-05T03:06:21.547Z] Copying: 400/1024 [MB] 
(12 MBps) [2024-12-05T03:06:22.926Z] Copying: 415/1024 [MB] (15 MBps) [2024-12-05T03:06:23.860Z] Copying: 431/1024 [MB] (15 MBps) [2024-12-05T03:06:24.799Z] Copying: 446/1024 [MB] (14 MBps) [2024-12-05T03:06:25.737Z] Copying: 462/1024 [MB] (16 MBps) [2024-12-05T03:06:26.696Z] Copying: 472/1024 [MB] (10 MBps) [2024-12-05T03:06:27.636Z] Copying: 489/1024 [MB] (17 MBps) [2024-12-05T03:06:28.570Z] Copying: 501/1024 [MB] (11 MBps) [2024-12-05T03:06:29.941Z] Copying: 515/1024 [MB] (13 MBps) [2024-12-05T03:06:30.871Z] Copying: 531/1024 [MB] (16 MBps) [2024-12-05T03:06:31.804Z] Copying: 547/1024 [MB] (16 MBps) [2024-12-05T03:06:32.738Z] Copying: 564/1024 [MB] (16 MBps) [2024-12-05T03:06:33.671Z] Copying: 580/1024 [MB] (16 MBps) [2024-12-05T03:06:34.613Z] Copying: 597/1024 [MB] (17 MBps) [2024-12-05T03:06:35.548Z] Copying: 609/1024 [MB] (11 MBps) [2024-12-05T03:06:36.921Z] Copying: 624/1024 [MB] (15 MBps) [2024-12-05T03:06:37.855Z] Copying: 641/1024 [MB] (16 MBps) [2024-12-05T03:06:38.790Z] Copying: 657/1024 [MB] (16 MBps) [2024-12-05T03:06:39.725Z] Copying: 674/1024 [MB] (16 MBps) [2024-12-05T03:06:40.660Z] Copying: 691/1024 [MB] (17 MBps) [2024-12-05T03:06:41.597Z] Copying: 708/1024 [MB] (17 MBps) [2024-12-05T03:06:42.543Z] Copying: 725/1024 [MB] (16 MBps) [2024-12-05T03:06:43.963Z] Copying: 735/1024 [MB] (10 MBps) [2024-12-05T03:06:44.528Z] Copying: 749/1024 [MB] (14 MBps) [2024-12-05T03:06:45.898Z] Copying: 766/1024 [MB] (16 MBps) [2024-12-05T03:06:46.845Z] Copying: 783/1024 [MB] (16 MBps) [2024-12-05T03:06:47.782Z] Copying: 797/1024 [MB] (13 MBps) [2024-12-05T03:06:48.716Z] Copying: 808/1024 [MB] (11 MBps) [2024-12-05T03:06:49.654Z] Copying: 825/1024 [MB] (16 MBps) [2024-12-05T03:06:50.599Z] Copying: 840/1024 [MB] (15 MBps) [2024-12-05T03:06:51.533Z] Copying: 851/1024 [MB] (10 MBps) [2024-12-05T03:06:52.906Z] Copying: 866/1024 [MB] (15 MBps) [2024-12-05T03:06:53.838Z] Copying: 882/1024 [MB] (15 MBps) [2024-12-05T03:06:54.781Z] Copying: 897/1024 [MB] (15 MBps) [2024-12-05T03:06:55.723Z] Copying: 911/1024 [MB] (13 MBps) [2024-12-05T03:06:56.654Z] Copying: 923/1024 [MB] (12 MBps) [2024-12-05T03:06:57.586Z] Copying: 939/1024 [MB] (15 MBps) [2024-12-05T03:06:58.960Z] Copying: 954/1024 [MB] (15 MBps) [2024-12-05T03:06:59.526Z] Copying: 970/1024 [MB] (15 MBps) [2024-12-05T03:07:00.898Z] Copying: 986/1024 [MB] (15 MBps) [2024-12-05T03:07:01.832Z] Copying: 1001/1024 [MB] (15 MBps) [2024-12-05T03:07:02.091Z] Copying: 1017/1024 [MB] (15 MBps) [2024-12-05T03:07:02.091Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-12-05 03:07:01.935804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.247 [2024-12-05 03:07:01.935909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:31.247 [2024-12-05 03:07:01.935937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:31.247 [2024-12-05 03:07:01.935953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.247 [2024-12-05 03:07:01.936017] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:31.247 [2024-12-05 03:07:01.938339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.247 [2024-12-05 03:07:01.938425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:31.247 [2024-12-05 03:07:01.938478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.289 ms 00:22:31.247 [2024-12-05 03:07:01.938495] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:31.247 [2024-12-05 03:07:01.940446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.247 [2024-12-05 03:07:01.940535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:31.247 [2024-12-05 03:07:01.940632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.924 ms 00:22:31.247 [2024-12-05 03:07:01.940641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.247 [2024-12-05 03:07:01.954821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.247 [2024-12-05 03:07:01.954851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:31.247 [2024-12-05 03:07:01.954862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.166 ms 00:22:31.247 [2024-12-05 03:07:01.954868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.247 [2024-12-05 03:07:01.959680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.247 [2024-12-05 03:07:01.959702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:31.247 [2024-12-05 03:07:01.959710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.784 ms 00:22:31.247 [2024-12-05 03:07:01.959718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.247 [2024-12-05 03:07:01.979204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.247 [2024-12-05 03:07:01.979232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:31.247 [2024-12-05 03:07:01.979241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.446 ms 00:22:31.247 [2024-12-05 03:07:01.979247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.247 [2024-12-05 03:07:01.991307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.247 [2024-12-05 03:07:01.991332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:31.247 [2024-12-05 03:07:01.991341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.033 ms 00:22:31.247 [2024-12-05 03:07:01.991349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.247 [2024-12-05 03:07:01.991442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.247 [2024-12-05 03:07:01.991452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:31.247 [2024-12-05 03:07:01.991459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:22:31.247 [2024-12-05 03:07:01.991466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.247 [2024-12-05 03:07:02.009842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.247 [2024-12-05 03:07:02.009865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:31.247 [2024-12-05 03:07:02.009872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.365 ms 00:22:31.247 [2024-12-05 03:07:02.009879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.247 [2024-12-05 03:07:02.027591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.247 [2024-12-05 03:07:02.027615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:31.247 [2024-12-05 03:07:02.027623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.688 ms 00:22:31.247 
[2024-12-05 03:07:02.027629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.247 [2024-12-05 03:07:02.044960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.247 [2024-12-05 03:07:02.044991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:31.247 [2024-12-05 03:07:02.044999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.306 ms 00:22:31.247 [2024-12-05 03:07:02.045005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.247 [2024-12-05 03:07:02.063185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.247 [2024-12-05 03:07:02.063208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:31.247 [2024-12-05 03:07:02.063216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.137 ms 00:22:31.247 [2024-12-05 03:07:02.063223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.247 [2024-12-05 03:07:02.063247] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:31.247 [2024-12-05 03:07:02.063259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:31.247 [2024-12-05 03:07:02.063271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:31.247 [2024-12-05 03:07:02.063278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:31.247 [2024-12-05 03:07:02.063284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:31.247 [2024-12-05 03:07:02.063291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:31.247 [2024-12-05 03:07:02.063296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:31.247 [2024-12-05 03:07:02.063302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:31.247 [2024-12-05 03:07:02.063308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:31.247 [2024-12-05 03:07:02.063314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:31.247 [2024-12-05 03:07:02.063320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:31.247 [2024-12-05 03:07:02.063326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:31.247 [2024-12-05 03:07:02.063332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:31.247 [2024-12-05 03:07:02.063337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:31.247 [2024-12-05 03:07:02.063343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:31.247 [2024-12-05 03:07:02.063351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:31.247 [2024-12-05 03:07:02.063357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:31.247 [2024-12-05 03:07:02.063363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:31.247 
[2024-12-05 03:07:02.063368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:31.247 [2024-12-05 03:07:02.063374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:31.247 [2024-12-05 03:07:02.063380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:31.247 [2024-12-05 03:07:02.063385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:31.247 [2024-12-05 03:07:02.063391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:31.247 [2024-12-05 03:07:02.063397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:31.247 [2024-12-05 03:07:02.063403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:31.247 [2024-12-05 03:07:02.063408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 
state: free 00:22:31.248 [2024-12-05 03:07:02.063515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 
0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:31.248 [2024-12-05 03:07:02.063867] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:31.248 [2024-12-05 03:07:02.063880] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b76b6fa1-c46b-47b1-9df2-5dec0f5783c7 00:22:31.248 [2024-12-05 03:07:02.063889] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:31.248 [2024-12-05 03:07:02.063895] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:31.248 [2024-12-05 03:07:02.063901] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:31.248 [2024-12-05 03:07:02.063907] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:31.248 [2024-12-05 03:07:02.063913] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:31.248 [2024-12-05 03:07:02.063924] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:31.248 [2024-12-05 03:07:02.063929] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:31.248 [2024-12-05 03:07:02.063934] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:31.248 [2024-12-05 03:07:02.063939] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:31.248 [2024-12-05 03:07:02.063945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.248 [2024-12-05 03:07:02.063952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:31.248 [2024-12-05 03:07:02.063958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.700 ms 00:22:31.248 [2024-12-05 03:07:02.063964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.248 [2024-12-05 03:07:02.073986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.248 [2024-12-05 03:07:02.074009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:31.248 [2024-12-05 03:07:02.074017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.007 ms 00:22:31.248 [2024-12-05 03:07:02.074023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.249 [2024-12-05 03:07:02.074319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.249 [2024-12-05 03:07:02.074333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:31.249 [2024-12-05 03:07:02.074339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:22:31.249 [2024-12-05 03:07:02.074348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.507 [2024-12-05 03:07:02.101803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.507 [2024-12-05 03:07:02.101829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:31.507 [2024-12-05 03:07:02.101837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.507 [2024-12-05 03:07:02.101843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.507 [2024-12-05 03:07:02.101889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.507 [2024-12-05 03:07:02.101896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:31.507 [2024-12-05 03:07:02.101903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.507 [2024-12-05 03:07:02.101911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.507 [2024-12-05 03:07:02.101952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.507 [2024-12-05 03:07:02.101960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:31.507 [2024-12-05 03:07:02.101966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.507 [2024-12-05 03:07:02.101972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.507 [2024-12-05 03:07:02.101983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.507 [2024-12-05 03:07:02.101990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:31.507 [2024-12-05 03:07:02.101997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.507 [2024-12-05 03:07:02.102003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.507 [2024-12-05 03:07:02.165284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.507 [2024-12-05 03:07:02.165320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:31.507 [2024-12-05 03:07:02.165330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.507 [2024-12-05 03:07:02.165337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.507 [2024-12-05 03:07:02.216524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.507 [2024-12-05 03:07:02.216560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:31.507 [2024-12-05 03:07:02.216569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.507 [2024-12-05 03:07:02.216580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.507 [2024-12-05 03:07:02.216648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.507 [2024-12-05 03:07:02.216656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:31.507 [2024-12-05 03:07:02.216663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.507 [2024-12-05 03:07:02.216670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.507 [2024-12-05 03:07:02.216699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:22:31.507 [2024-12-05 03:07:02.216708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:31.507 [2024-12-05 03:07:02.216715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.507 [2024-12-05 03:07:02.216721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.507 [2024-12-05 03:07:02.216801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.507 [2024-12-05 03:07:02.216810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:31.507 [2024-12-05 03:07:02.216817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.507 [2024-12-05 03:07:02.216823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.507 [2024-12-05 03:07:02.216847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.507 [2024-12-05 03:07:02.216854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:31.507 [2024-12-05 03:07:02.216861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.507 [2024-12-05 03:07:02.216867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.507 [2024-12-05 03:07:02.216902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.507 [2024-12-05 03:07:02.216912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:31.507 [2024-12-05 03:07:02.216919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.507 [2024-12-05 03:07:02.216925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.507 [2024-12-05 03:07:02.216965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.507 [2024-12-05 03:07:02.216973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:31.507 [2024-12-05 03:07:02.216980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.507 [2024-12-05 03:07:02.216986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.507 [2024-12-05 03:07:02.217108] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 281.258 ms, result 0 00:22:32.075 00:22:32.075 00:22:32.075 03:07:02 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:22:32.075 [2024-12-05 03:07:02.901348] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
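The restore flow being exercised here can be pieced together from the commands visible in the log: ftl/restore.sh@69 fills a 1 GiB test file from /dev/urandom, @70 takes its md5sum, @73 writes the file into the FTL bdev with spdk_dd, and @74 (just started above) reads the same 262144 blocks back out of ftl0. A minimal sketch of that round trip, using only the paths and flags shown in the log, is given below; the final md5sum comparison is an assumption about how restore.sh verifies the data, since the log only shows the checksum being taken, not how it is checked.

#!/usr/bin/env bash
# Sketch of the ftl_restore write/read-back cycle.
# Paths and flags are copied from the log above; the verification step at the
# end is an assumption and is not shown in the log itself.
spdk=/home/vagrant/spdk_repo/spdk
testfile=$spdk/test/ftl/testfile
ftl_json=$spdk/test/ftl/config/ftl.json

dd if=/dev/urandom of="$testfile" bs=4K count=256K        # 256K blocks * 4 KiB = 1 GiB of random data
md5sum "$testfile" > "$testfile.md5"                      # checksum of the data before it is written out
"$spdk"/build/bin/spdk_dd --if="$testfile" --ob=ftl0 --json="$ftl_json"                  # write the file into ftl0
"$spdk"/build/bin/spdk_dd --ib=ftl0 --of="$testfile" --json="$ftl_json" --count=262144   # read the same 262144 blocks back
md5sum -c "$testfile.md5"                                 # assumed verification: compare read-back data to the saved checksum

Note that the read-back run writes over the original test file (--of points at the same path), which is why the checksum has to be captured before the second spdk_dd invocation.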
00:22:32.075 [2024-12-05 03:07:02.901464] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78193 ] 00:22:32.333 [2024-12-05 03:07:03.058341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.333 [2024-12-05 03:07:03.150504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.591 [2024-12-05 03:07:03.382257] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:32.591 [2024-12-05 03:07:03.382315] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:32.851 [2024-12-05 03:07:03.538044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.851 [2024-12-05 03:07:03.538091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:32.851 [2024-12-05 03:07:03.538104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:32.851 [2024-12-05 03:07:03.538111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.851 [2024-12-05 03:07:03.538150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.851 [2024-12-05 03:07:03.538160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:32.851 [2024-12-05 03:07:03.538167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:22:32.851 [2024-12-05 03:07:03.538173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.851 [2024-12-05 03:07:03.538186] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:32.851 [2024-12-05 03:07:03.538733] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:32.851 [2024-12-05 03:07:03.538751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.851 [2024-12-05 03:07:03.538758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:32.851 [2024-12-05 03:07:03.538765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:22:32.851 [2024-12-05 03:07:03.538771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.851 [2024-12-05 03:07:03.539990] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:32.851 [2024-12-05 03:07:03.550538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.851 [2024-12-05 03:07:03.550566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:32.851 [2024-12-05 03:07:03.550577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.549 ms 00:22:32.851 [2024-12-05 03:07:03.550583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.851 [2024-12-05 03:07:03.550633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.851 [2024-12-05 03:07:03.550640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:32.851 [2024-12-05 03:07:03.550647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:32.851 [2024-12-05 03:07:03.550653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.851 [2024-12-05 03:07:03.556945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:32.851 [2024-12-05 03:07:03.556969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:32.851 [2024-12-05 03:07:03.556977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.253 ms 00:22:32.851 [2024-12-05 03:07:03.556985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.851 [2024-12-05 03:07:03.557039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.851 [2024-12-05 03:07:03.557046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:32.851 [2024-12-05 03:07:03.557053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:22:32.851 [2024-12-05 03:07:03.557059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.851 [2024-12-05 03:07:03.557106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.851 [2024-12-05 03:07:03.557115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:32.851 [2024-12-05 03:07:03.557126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:32.851 [2024-12-05 03:07:03.557132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.851 [2024-12-05 03:07:03.557152] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:32.851 [2024-12-05 03:07:03.560169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.851 [2024-12-05 03:07:03.560191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:32.851 [2024-12-05 03:07:03.560200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.022 ms 00:22:32.851 [2024-12-05 03:07:03.560206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.851 [2024-12-05 03:07:03.560232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.851 [2024-12-05 03:07:03.560239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:32.851 [2024-12-05 03:07:03.560245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:32.851 [2024-12-05 03:07:03.560251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.851 [2024-12-05 03:07:03.560267] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:32.851 [2024-12-05 03:07:03.560285] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:32.851 [2024-12-05 03:07:03.560314] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:32.851 [2024-12-05 03:07:03.560329] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:32.851 [2024-12-05 03:07:03.560413] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:32.851 [2024-12-05 03:07:03.560422] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:32.851 [2024-12-05 03:07:03.560431] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:32.851 [2024-12-05 03:07:03.560439] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:32.851 [2024-12-05 03:07:03.560446] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:32.851 [2024-12-05 03:07:03.560452] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:32.851 [2024-12-05 03:07:03.560459] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:32.851 [2024-12-05 03:07:03.560468] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:32.851 [2024-12-05 03:07:03.560474] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:32.851 [2024-12-05 03:07:03.560480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.851 [2024-12-05 03:07:03.560486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:32.851 [2024-12-05 03:07:03.560492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:22:32.851 [2024-12-05 03:07:03.560497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.851 [2024-12-05 03:07:03.560577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.851 [2024-12-05 03:07:03.560586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:32.852 [2024-12-05 03:07:03.560591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:32.852 [2024-12-05 03:07:03.560597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.852 [2024-12-05 03:07:03.560677] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:32.852 [2024-12-05 03:07:03.560686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:32.852 [2024-12-05 03:07:03.560692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:32.852 [2024-12-05 03:07:03.560700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:32.852 [2024-12-05 03:07:03.560707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:32.852 [2024-12-05 03:07:03.560712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:32.852 [2024-12-05 03:07:03.560718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:32.852 [2024-12-05 03:07:03.560723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:32.852 [2024-12-05 03:07:03.560729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:32.852 [2024-12-05 03:07:03.560737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:32.852 [2024-12-05 03:07:03.560742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:32.852 [2024-12-05 03:07:03.560747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:32.852 [2024-12-05 03:07:03.560752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:32.852 [2024-12-05 03:07:03.560763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:32.852 [2024-12-05 03:07:03.560769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:32.852 [2024-12-05 03:07:03.560775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:32.852 [2024-12-05 03:07:03.560780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:32.852 [2024-12-05 03:07:03.560785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:32.852 [2024-12-05 03:07:03.560790] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:32.852 [2024-12-05 03:07:03.560796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:32.852 [2024-12-05 03:07:03.560801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:32.852 [2024-12-05 03:07:03.560807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:32.852 [2024-12-05 03:07:03.560812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:32.852 [2024-12-05 03:07:03.560817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:32.852 [2024-12-05 03:07:03.560823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:32.852 [2024-12-05 03:07:03.560828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:32.852 [2024-12-05 03:07:03.560834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:32.852 [2024-12-05 03:07:03.560839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:32.852 [2024-12-05 03:07:03.560844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:32.852 [2024-12-05 03:07:03.560850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:32.852 [2024-12-05 03:07:03.560856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:32.852 [2024-12-05 03:07:03.560861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:32.852 [2024-12-05 03:07:03.560866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:32.852 [2024-12-05 03:07:03.560872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:32.852 [2024-12-05 03:07:03.560877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:32.852 [2024-12-05 03:07:03.560883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:32.852 [2024-12-05 03:07:03.560888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:32.852 [2024-12-05 03:07:03.560893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:32.852 [2024-12-05 03:07:03.560898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:32.852 [2024-12-05 03:07:03.560903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:32.852 [2024-12-05 03:07:03.560908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:32.852 [2024-12-05 03:07:03.560915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:32.852 [2024-12-05 03:07:03.560920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:32.852 [2024-12-05 03:07:03.560926] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:32.852 [2024-12-05 03:07:03.560932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:32.852 [2024-12-05 03:07:03.560938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:32.852 [2024-12-05 03:07:03.560943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:32.852 [2024-12-05 03:07:03.560949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:32.852 [2024-12-05 03:07:03.560955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:32.852 [2024-12-05 03:07:03.560959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:32.852 
[2024-12-05 03:07:03.560965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:32.852 [2024-12-05 03:07:03.560971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:32.852 [2024-12-05 03:07:03.560976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:32.852 [2024-12-05 03:07:03.560982] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:32.852 [2024-12-05 03:07:03.560989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:32.852 [2024-12-05 03:07:03.560997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:32.852 [2024-12-05 03:07:03.561003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:32.852 [2024-12-05 03:07:03.561008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:32.852 [2024-12-05 03:07:03.561014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:32.852 [2024-12-05 03:07:03.561019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:32.852 [2024-12-05 03:07:03.561025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:32.852 [2024-12-05 03:07:03.561030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:32.852 [2024-12-05 03:07:03.561035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:32.852 [2024-12-05 03:07:03.561042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:32.852 [2024-12-05 03:07:03.561048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:32.852 [2024-12-05 03:07:03.561054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:32.852 [2024-12-05 03:07:03.561059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:32.852 [2024-12-05 03:07:03.561064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:32.852 [2024-12-05 03:07:03.561080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:32.852 [2024-12-05 03:07:03.561086] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:32.852 [2024-12-05 03:07:03.561093] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:32.852 [2024-12-05 03:07:03.561099] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:32.852 [2024-12-05 03:07:03.561105] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:32.852 [2024-12-05 03:07:03.561112] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:32.852 [2024-12-05 03:07:03.561118] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:32.852 [2024-12-05 03:07:03.561125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.852 [2024-12-05 03:07:03.561131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:32.852 [2024-12-05 03:07:03.561137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms 00:22:32.852 [2024-12-05 03:07:03.561143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.852 [2024-12-05 03:07:03.585404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.852 [2024-12-05 03:07:03.585434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:32.852 [2024-12-05 03:07:03.585443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.218 ms 00:22:32.852 [2024-12-05 03:07:03.585452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.852 [2024-12-05 03:07:03.585518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.852 [2024-12-05 03:07:03.585525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:32.852 [2024-12-05 03:07:03.585531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:32.852 [2024-12-05 03:07:03.585537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.852 [2024-12-05 03:07:03.623992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.852 [2024-12-05 03:07:03.624025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:32.852 [2024-12-05 03:07:03.624035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.414 ms 00:22:32.852 [2024-12-05 03:07:03.624042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.852 [2024-12-05 03:07:03.624083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.852 [2024-12-05 03:07:03.624091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:32.852 [2024-12-05 03:07:03.624101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:32.852 [2024-12-05 03:07:03.624107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.852 [2024-12-05 03:07:03.624531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.852 [2024-12-05 03:07:03.624550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:32.852 [2024-12-05 03:07:03.624558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:22:32.853 [2024-12-05 03:07:03.624565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.853 [2024-12-05 03:07:03.624680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.853 [2024-12-05 03:07:03.624695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:32.853 [2024-12-05 03:07:03.624703] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:22:32.853 [2024-12-05 03:07:03.624714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.853 [2024-12-05 03:07:03.636870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.853 [2024-12-05 03:07:03.636893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:32.853 [2024-12-05 03:07:03.636903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.139 ms 00:22:32.853 [2024-12-05 03:07:03.636909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.853 [2024-12-05 03:07:03.647468] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:32.853 [2024-12-05 03:07:03.647495] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:32.853 [2024-12-05 03:07:03.647505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.853 [2024-12-05 03:07:03.647512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:32.853 [2024-12-05 03:07:03.647520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.508 ms 00:22:32.853 [2024-12-05 03:07:03.647526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.853 [2024-12-05 03:07:03.666157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.853 [2024-12-05 03:07:03.666185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:32.853 [2024-12-05 03:07:03.666195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.600 ms 00:22:32.853 [2024-12-05 03:07:03.666202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.853 [2024-12-05 03:07:03.675694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.853 [2024-12-05 03:07:03.675718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:32.853 [2024-12-05 03:07:03.675726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.452 ms 00:22:32.853 [2024-12-05 03:07:03.675732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.853 [2024-12-05 03:07:03.684849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.853 [2024-12-05 03:07:03.684874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:32.853 [2024-12-05 03:07:03.684882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.091 ms 00:22:32.853 [2024-12-05 03:07:03.684888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.853 [2024-12-05 03:07:03.685370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.853 [2024-12-05 03:07:03.685387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:32.853 [2024-12-05 03:07:03.685397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:22:32.853 [2024-12-05 03:07:03.685403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.113 [2024-12-05 03:07:03.733068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.113 [2024-12-05 03:07:03.733107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:33.113 [2024-12-05 03:07:03.733121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 47.651 ms 00:22:33.113 [2024-12-05 03:07:03.733128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.113 [2024-12-05 03:07:03.741312] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:33.113 [2024-12-05 03:07:03.743409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.113 [2024-12-05 03:07:03.743433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:33.113 [2024-12-05 03:07:03.743443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.247 ms 00:22:33.114 [2024-12-05 03:07:03.743451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.114 [2024-12-05 03:07:03.743507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.114 [2024-12-05 03:07:03.743516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:33.114 [2024-12-05 03:07:03.743526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:33.114 [2024-12-05 03:07:03.743534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.114 [2024-12-05 03:07:03.743609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.114 [2024-12-05 03:07:03.743619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:33.114 [2024-12-05 03:07:03.743626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:33.114 [2024-12-05 03:07:03.743632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.114 [2024-12-05 03:07:03.743652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.114 [2024-12-05 03:07:03.743659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:33.114 [2024-12-05 03:07:03.743666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:33.114 [2024-12-05 03:07:03.743672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.114 [2024-12-05 03:07:03.743702] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:33.114 [2024-12-05 03:07:03.743711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.114 [2024-12-05 03:07:03.743717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:33.114 [2024-12-05 03:07:03.743724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:33.114 [2024-12-05 03:07:03.743730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.114 [2024-12-05 03:07:03.762223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.114 [2024-12-05 03:07:03.762253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:33.114 [2024-12-05 03:07:03.762266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.479 ms 00:22:33.114 [2024-12-05 03:07:03.762274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.114 [2024-12-05 03:07:03.762329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.114 [2024-12-05 03:07:03.762337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:33.114 [2024-12-05 03:07:03.762343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:22:33.114 [2024-12-05 03:07:03.762349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:33.114 [2024-12-05 03:07:03.763449] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 225.015 ms, result 0 00:22:34.498  [2024-12-05T03:07:05.912Z] Copying: 12/1024 [MB] (12 MBps) [2024-12-05T03:07:07.284Z] Copying: 23/1024 [MB] (11 MBps) [2024-12-05T03:07:08.217Z] Copying: 38/1024 [MB] (15 MBps) [2024-12-05T03:07:09.159Z] Copying: 55/1024 [MB] (16 MBps) [2024-12-05T03:07:10.117Z] Copying: 65/1024 [MB] (10 MBps) [2024-12-05T03:07:11.052Z] Copying: 76/1024 [MB] (10 MBps) [2024-12-05T03:07:12.063Z] Copying: 91/1024 [MB] (14 MBps) [2024-12-05T03:07:13.003Z] Copying: 105/1024 [MB] (14 MBps) [2024-12-05T03:07:13.938Z] Copying: 119/1024 [MB] (13 MBps) [2024-12-05T03:07:15.312Z] Copying: 134/1024 [MB] (15 MBps) [2024-12-05T03:07:16.253Z] Copying: 150/1024 [MB] (15 MBps) [2024-12-05T03:07:17.195Z] Copying: 163/1024 [MB] (13 MBps) [2024-12-05T03:07:18.129Z] Copying: 174/1024 [MB] (10 MBps) [2024-12-05T03:07:19.075Z] Copying: 191/1024 [MB] (16 MBps) [2024-12-05T03:07:20.013Z] Copying: 203/1024 [MB] (11 MBps) [2024-12-05T03:07:20.954Z] Copying: 217/1024 [MB] (14 MBps) [2024-12-05T03:07:22.342Z] Copying: 230/1024 [MB] (12 MBps) [2024-12-05T03:07:22.915Z] Copying: 240/1024 [MB] (10 MBps) [2024-12-05T03:07:24.302Z] Copying: 251/1024 [MB] (10 MBps) [2024-12-05T03:07:25.238Z] Copying: 261/1024 [MB] (10 MBps) [2024-12-05T03:07:26.171Z] Copying: 273/1024 [MB] (11 MBps) [2024-12-05T03:07:27.108Z] Copying: 288/1024 [MB] (14 MBps) [2024-12-05T03:07:28.046Z] Copying: 301/1024 [MB] (13 MBps) [2024-12-05T03:07:28.982Z] Copying: 312/1024 [MB] (10 MBps) [2024-12-05T03:07:29.919Z] Copying: 324/1024 [MB] (12 MBps) [2024-12-05T03:07:31.296Z] Copying: 336/1024 [MB] (11 MBps) [2024-12-05T03:07:32.234Z] Copying: 347/1024 [MB] (10 MBps) [2024-12-05T03:07:33.174Z] Copying: 358/1024 [MB] (10 MBps) [2024-12-05T03:07:34.105Z] Copying: 370/1024 [MB] (12 MBps) [2024-12-05T03:07:35.039Z] Copying: 384/1024 [MB] (13 MBps) [2024-12-05T03:07:35.973Z] Copying: 399/1024 [MB] (14 MBps) [2024-12-05T03:07:36.908Z] Copying: 412/1024 [MB] (13 MBps) [2024-12-05T03:07:38.287Z] Copying: 426/1024 [MB] (13 MBps) [2024-12-05T03:07:39.226Z] Copying: 440/1024 [MB] (13 MBps) [2024-12-05T03:07:40.161Z] Copying: 452/1024 [MB] (11 MBps) [2024-12-05T03:07:41.126Z] Copying: 464/1024 [MB] (12 MBps) [2024-12-05T03:07:42.060Z] Copying: 478/1024 [MB] (13 MBps) [2024-12-05T03:07:42.994Z] Copying: 492/1024 [MB] (13 MBps) [2024-12-05T03:07:43.928Z] Copying: 506/1024 [MB] (14 MBps) [2024-12-05T03:07:45.304Z] Copying: 519/1024 [MB] (13 MBps) [2024-12-05T03:07:46.243Z] Copying: 532/1024 [MB] (12 MBps) [2024-12-05T03:07:47.177Z] Copying: 542/1024 [MB] (10 MBps) [2024-12-05T03:07:48.114Z] Copying: 555/1024 [MB] (12 MBps) [2024-12-05T03:07:49.057Z] Copying: 568/1024 [MB] (12 MBps) [2024-12-05T03:07:50.001Z] Copying: 578/1024 [MB] (10 MBps) [2024-12-05T03:07:50.947Z] Copying: 589/1024 [MB] (10 MBps) [2024-12-05T03:07:52.332Z] Copying: 599/1024 [MB] (10 MBps) [2024-12-05T03:07:53.274Z] Copying: 610/1024 [MB] (10 MBps) [2024-12-05T03:07:54.207Z] Copying: 620/1024 [MB] (10 MBps) [2024-12-05T03:07:55.150Z] Copying: 632/1024 [MB] (11 MBps) [2024-12-05T03:07:56.085Z] Copying: 644/1024 [MB] (11 MBps) [2024-12-05T03:07:57.020Z] Copying: 655/1024 [MB] (10 MBps) [2024-12-05T03:07:57.956Z] Copying: 667/1024 [MB] (12 MBps) [2024-12-05T03:07:59.337Z] Copying: 681/1024 [MB] (13 MBps) [2024-12-05T03:08:00.272Z] Copying: 692/1024 [MB] (11 MBps) [2024-12-05T03:08:01.206Z] Copying: 705/1024 [MB] (13 MBps) 
[2024-12-05T03:08:02.139Z] Copying: 719/1024 [MB] (13 MBps) [2024-12-05T03:08:03.079Z] Copying: 732/1024 [MB] (13 MBps) [2024-12-05T03:08:04.017Z] Copying: 744/1024 [MB] (11 MBps) [2024-12-05T03:08:04.957Z] Copying: 756/1024 [MB] (11 MBps) [2024-12-05T03:08:06.327Z] Copying: 768/1024 [MB] (11 MBps) [2024-12-05T03:08:07.267Z] Copying: 781/1024 [MB] (13 MBps) [2024-12-05T03:08:08.203Z] Copying: 793/1024 [MB] (11 MBps) [2024-12-05T03:08:09.212Z] Copying: 804/1024 [MB] (11 MBps) [2024-12-05T03:08:10.148Z] Copying: 816/1024 [MB] (11 MBps) [2024-12-05T03:08:11.091Z] Copying: 827/1024 [MB] (11 MBps) [2024-12-05T03:08:12.036Z] Copying: 838/1024 [MB] (11 MBps) [2024-12-05T03:08:12.978Z] Copying: 849/1024 [MB] (10 MBps) [2024-12-05T03:08:13.918Z] Copying: 860/1024 [MB] (11 MBps) [2024-12-05T03:08:15.314Z] Copying: 872/1024 [MB] (11 MBps) [2024-12-05T03:08:16.258Z] Copying: 883/1024 [MB] (11 MBps) [2024-12-05T03:08:17.201Z] Copying: 894/1024 [MB] (11 MBps) [2024-12-05T03:08:18.145Z] Copying: 905/1024 [MB] (11 MBps) [2024-12-05T03:08:19.089Z] Copying: 918/1024 [MB] (12 MBps) [2024-12-05T03:08:20.034Z] Copying: 929/1024 [MB] (10 MBps) [2024-12-05T03:08:20.978Z] Copying: 939/1024 [MB] (10 MBps) [2024-12-05T03:08:21.922Z] Copying: 950/1024 [MB] (10 MBps) [2024-12-05T03:08:23.308Z] Copying: 961/1024 [MB] (10 MBps) [2024-12-05T03:08:24.252Z] Copying: 982/1024 [MB] (20 MBps) [2024-12-05T03:08:25.195Z] Copying: 994/1024 [MB] (11 MBps) [2024-12-05T03:08:25.766Z] Copying: 1009/1024 [MB] (15 MBps) [2024-12-05T03:08:25.766Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-12-05 03:08:25.739859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.922 [2024-12-05 03:08:25.739944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:54.922 [2024-12-05 03:08:25.739961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:54.922 [2024-12-05 03:08:25.739971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.922 [2024-12-05 03:08:25.739996] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:54.922 [2024-12-05 03:08:25.743686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.922 [2024-12-05 03:08:25.743737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:54.922 [2024-12-05 03:08:25.743749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.671 ms 00:23:54.922 [2024-12-05 03:08:25.743759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.922 [2024-12-05 03:08:25.744012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.922 [2024-12-05 03:08:25.744024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:54.922 [2024-12-05 03:08:25.744034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:23:54.922 [2024-12-05 03:08:25.744043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.922 [2024-12-05 03:08:25.747952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.922 [2024-12-05 03:08:25.747977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:54.922 [2024-12-05 03:08:25.747988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.894 ms 00:23:54.922 [2024-12-05 03:08:25.748002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.922 [2024-12-05 03:08:25.754346] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.922 [2024-12-05 03:08:25.754385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:54.922 [2024-12-05 03:08:25.754395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.325 ms 00:23:54.922 [2024-12-05 03:08:25.754402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.184 [2024-12-05 03:08:25.780747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.184 [2024-12-05 03:08:25.780796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:55.184 [2024-12-05 03:08:25.780810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.287 ms 00:23:55.184 [2024-12-05 03:08:25.780818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.184 [2024-12-05 03:08:25.797210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.184 [2024-12-05 03:08:25.797260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:55.184 [2024-12-05 03:08:25.797273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.345 ms 00:23:55.184 [2024-12-05 03:08:25.797282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.184 [2024-12-05 03:08:25.797434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.184 [2024-12-05 03:08:25.797446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:55.184 [2024-12-05 03:08:25.797456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:23:55.184 [2024-12-05 03:08:25.797466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.184 [2024-12-05 03:08:25.823151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.184 [2024-12-05 03:08:25.823194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:55.184 [2024-12-05 03:08:25.823206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.671 ms 00:23:55.184 [2024-12-05 03:08:25.823213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.184 [2024-12-05 03:08:25.848370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.184 [2024-12-05 03:08:25.848413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:55.184 [2024-12-05 03:08:25.848426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.113 ms 00:23:55.184 [2024-12-05 03:08:25.848434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.184 [2024-12-05 03:08:25.872813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.184 [2024-12-05 03:08:25.872855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:55.184 [2024-12-05 03:08:25.872867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.334 ms 00:23:55.184 [2024-12-05 03:08:25.872875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.184 [2024-12-05 03:08:25.897606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.184 [2024-12-05 03:08:25.897647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:55.184 [2024-12-05 03:08:25.897659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.659 ms 00:23:55.184 [2024-12-05 03:08:25.897666] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:55.184 [2024-12-05 03:08:25.897709] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:55.184 [2024-12-05 03:08:25.897731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 
wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.897999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.898006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.898013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.898022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.898029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.898037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.898044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.898051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.898059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.898067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.898095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:55.184 [2024-12-05 03:08:25.898104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898356] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898555] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:55.185 [2024-12-05 03:08:25.898580] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:55.185 [2024-12-05 03:08:25.898588] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b76b6fa1-c46b-47b1-9df2-5dec0f5783c7 00:23:55.185 [2024-12-05 03:08:25.898596] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:55.185 [2024-12-05 03:08:25.898603] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:55.185 [2024-12-05 03:08:25.898611] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:55.185 [2024-12-05 03:08:25.898619] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:55.185 [2024-12-05 03:08:25.898634] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:55.185 [2024-12-05 03:08:25.898642] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:55.185 [2024-12-05 03:08:25.898649] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:55.185 [2024-12-05 03:08:25.898656] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:55.185 [2024-12-05 03:08:25.898663] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:55.185 [2024-12-05 03:08:25.898670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.185 [2024-12-05 03:08:25.898678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:55.185 [2024-12-05 03:08:25.898687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.962 ms 00:23:55.185 [2024-12-05 03:08:25.898697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.185 [2024-12-05 03:08:25.912388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.185 [2024-12-05 03:08:25.912431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:55.185 [2024-12-05 03:08:25.912442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.672 ms 00:23:55.185 [2024-12-05 03:08:25.912450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.185 [2024-12-05 03:08:25.912868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.185 [2024-12-05 03:08:25.912887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:55.185 [2024-12-05 03:08:25.912905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:23:55.185 [2024-12-05 03:08:25.912913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.185 [2024-12-05 03:08:25.949052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.185 [2024-12-05 03:08:25.949110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:55.185 [2024-12-05 03:08:25.949121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.185 [2024-12-05 03:08:25.949130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.185 [2024-12-05 03:08:25.949192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.185 [2024-12-05 03:08:25.949202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:55.185 
[2024-12-05 03:08:25.949217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.185 [2024-12-05 03:08:25.949225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.185 [2024-12-05 03:08:25.949304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.185 [2024-12-05 03:08:25.949317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:55.185 [2024-12-05 03:08:25.949326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.185 [2024-12-05 03:08:25.949334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.185 [2024-12-05 03:08:25.949351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.185 [2024-12-05 03:08:25.949360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:55.185 [2024-12-05 03:08:25.949367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.185 [2024-12-05 03:08:25.949377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.445 [2024-12-05 03:08:26.035442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.445 [2024-12-05 03:08:26.035495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:55.445 [2024-12-05 03:08:26.035510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.445 [2024-12-05 03:08:26.035519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.446 [2024-12-05 03:08:26.104907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.446 [2024-12-05 03:08:26.104963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:55.446 [2024-12-05 03:08:26.104983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.446 [2024-12-05 03:08:26.104991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.446 [2024-12-05 03:08:26.105053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.446 [2024-12-05 03:08:26.105063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:55.446 [2024-12-05 03:08:26.105087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.446 [2024-12-05 03:08:26.105097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.446 [2024-12-05 03:08:26.105158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.446 [2024-12-05 03:08:26.105169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:55.446 [2024-12-05 03:08:26.105179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.446 [2024-12-05 03:08:26.105186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.446 [2024-12-05 03:08:26.105291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.446 [2024-12-05 03:08:26.105302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:55.446 [2024-12-05 03:08:26.105311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.446 [2024-12-05 03:08:26.105319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.446 [2024-12-05 03:08:26.105354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.446 [2024-12-05 03:08:26.105363] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:55.446 [2024-12-05 03:08:26.105372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.446 [2024-12-05 03:08:26.105380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.446 [2024-12-05 03:08:26.105426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.446 [2024-12-05 03:08:26.105437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:55.446 [2024-12-05 03:08:26.105446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.446 [2024-12-05 03:08:26.105453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.446 [2024-12-05 03:08:26.105500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.446 [2024-12-05 03:08:26.105511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:55.446 [2024-12-05 03:08:26.105520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.446 [2024-12-05 03:08:26.105528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.446 [2024-12-05 03:08:26.105672] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 365.774 ms, result 0 00:23:56.018 00:23:56.018 00:23:56.279 03:08:26 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:58.829 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:58.829 03:08:29 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:23:58.829 [2024-12-05 03:08:29.146479] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:23:58.829 [2024-12-05 03:08:29.146617] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79076 ] 00:23:58.829 [2024-12-05 03:08:29.310688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.829 [2024-12-05 03:08:29.428104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.092 [2024-12-05 03:08:29.724385] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:59.092 [2024-12-05 03:08:29.724474] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:59.092 [2024-12-05 03:08:29.885563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.092 [2024-12-05 03:08:29.885629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:59.092 [2024-12-05 03:08:29.885644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:59.092 [2024-12-05 03:08:29.885653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.092 [2024-12-05 03:08:29.885708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.092 [2024-12-05 03:08:29.885722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:59.092 [2024-12-05 03:08:29.885731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:59.092 [2024-12-05 03:08:29.885739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.092 [2024-12-05 03:08:29.885759] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:59.092 [2024-12-05 03:08:29.886505] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:59.092 [2024-12-05 03:08:29.886537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.092 [2024-12-05 03:08:29.886545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:59.092 [2024-12-05 03:08:29.886555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.783 ms 00:23:59.092 [2024-12-05 03:08:29.886563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.092 [2024-12-05 03:08:29.888226] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:59.092 [2024-12-05 03:08:29.902509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.092 [2024-12-05 03:08:29.902559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:59.092 [2024-12-05 03:08:29.902573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.285 ms 00:23:59.092 [2024-12-05 03:08:29.902581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.092 [2024-12-05 03:08:29.902664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.092 [2024-12-05 03:08:29.902674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:59.092 [2024-12-05 03:08:29.902683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:59.092 [2024-12-05 03:08:29.902691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.092 [2024-12-05 03:08:29.910683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:59.092 [2024-12-05 03:08:29.910730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:59.092 [2024-12-05 03:08:29.910740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.913 ms 00:23:59.092 [2024-12-05 03:08:29.910754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.092 [2024-12-05 03:08:29.910831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.092 [2024-12-05 03:08:29.910842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:59.092 [2024-12-05 03:08:29.910851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:23:59.092 [2024-12-05 03:08:29.910859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.092 [2024-12-05 03:08:29.910901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.092 [2024-12-05 03:08:29.910911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:59.092 [2024-12-05 03:08:29.910919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:59.092 [2024-12-05 03:08:29.910927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.092 [2024-12-05 03:08:29.910954] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:59.092 [2024-12-05 03:08:29.914898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.092 [2024-12-05 03:08:29.914936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:59.092 [2024-12-05 03:08:29.914950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.950 ms 00:23:59.092 [2024-12-05 03:08:29.914959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.092 [2024-12-05 03:08:29.914996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.092 [2024-12-05 03:08:29.915011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:59.092 [2024-12-05 03:08:29.915020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:59.092 [2024-12-05 03:08:29.915027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.092 [2024-12-05 03:08:29.915092] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:59.092 [2024-12-05 03:08:29.915119] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:59.092 [2024-12-05 03:08:29.915156] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:59.092 [2024-12-05 03:08:29.915176] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:59.092 [2024-12-05 03:08:29.915283] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:59.092 [2024-12-05 03:08:29.915295] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:59.092 [2024-12-05 03:08:29.915306] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:59.092 [2024-12-05 03:08:29.915316] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:59.092 [2024-12-05 03:08:29.915326] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:59.092 [2024-12-05 03:08:29.915334] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:59.092 [2024-12-05 03:08:29.915343] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:59.092 [2024-12-05 03:08:29.915353] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:59.092 [2024-12-05 03:08:29.915362] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:59.092 [2024-12-05 03:08:29.915370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.092 [2024-12-05 03:08:29.915378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:59.092 [2024-12-05 03:08:29.915386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:23:59.092 [2024-12-05 03:08:29.915393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.092 [2024-12-05 03:08:29.915480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.092 [2024-12-05 03:08:29.915490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:59.092 [2024-12-05 03:08:29.915498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:59.092 [2024-12-05 03:08:29.915505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.092 [2024-12-05 03:08:29.915610] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:59.092 [2024-12-05 03:08:29.915631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:59.092 [2024-12-05 03:08:29.915639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:59.092 [2024-12-05 03:08:29.915648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.092 [2024-12-05 03:08:29.915657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:59.092 [2024-12-05 03:08:29.915664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:59.092 [2024-12-05 03:08:29.915671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:59.092 [2024-12-05 03:08:29.915679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:59.092 [2024-12-05 03:08:29.915686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:59.092 [2024-12-05 03:08:29.915693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:59.092 [2024-12-05 03:08:29.915701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:59.092 [2024-12-05 03:08:29.915708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:59.092 [2024-12-05 03:08:29.915715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:59.092 [2024-12-05 03:08:29.915730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:59.092 [2024-12-05 03:08:29.915738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:59.092 [2024-12-05 03:08:29.915745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.092 [2024-12-05 03:08:29.915753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:59.092 [2024-12-05 03:08:29.915760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:59.092 [2024-12-05 03:08:29.915767] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.092 [2024-12-05 03:08:29.915775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:59.092 [2024-12-05 03:08:29.915782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:59.092 [2024-12-05 03:08:29.915790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:59.092 [2024-12-05 03:08:29.915797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:59.092 [2024-12-05 03:08:29.915803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:59.092 [2024-12-05 03:08:29.915810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:59.092 [2024-12-05 03:08:29.915817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:59.093 [2024-12-05 03:08:29.915823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:59.093 [2024-12-05 03:08:29.915830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:59.093 [2024-12-05 03:08:29.915837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:59.093 [2024-12-05 03:08:29.915843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:59.093 [2024-12-05 03:08:29.915850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:59.093 [2024-12-05 03:08:29.915857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:59.093 [2024-12-05 03:08:29.915864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:59.093 [2024-12-05 03:08:29.915871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:59.093 [2024-12-05 03:08:29.915878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:59.093 [2024-12-05 03:08:29.915885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:59.093 [2024-12-05 03:08:29.915891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:59.093 [2024-12-05 03:08:29.915898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:59.093 [2024-12-05 03:08:29.915904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:59.093 [2024-12-05 03:08:29.915910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.093 [2024-12-05 03:08:29.915917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:59.093 [2024-12-05 03:08:29.915923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:59.093 [2024-12-05 03:08:29.915931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.093 [2024-12-05 03:08:29.915939] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:59.093 [2024-12-05 03:08:29.915947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:59.093 [2024-12-05 03:08:29.915955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:59.093 [2024-12-05 03:08:29.915963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.093 [2024-12-05 03:08:29.915971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:59.093 [2024-12-05 03:08:29.915977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:59.093 [2024-12-05 03:08:29.915984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:59.093 
[2024-12-05 03:08:29.915991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:59.093 [2024-12-05 03:08:29.915997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:59.093 [2024-12-05 03:08:29.916005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:59.093 [2024-12-05 03:08:29.916014] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:59.093 [2024-12-05 03:08:29.916023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:59.093 [2024-12-05 03:08:29.916036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:59.093 [2024-12-05 03:08:29.916044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:59.093 [2024-12-05 03:08:29.916051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:59.093 [2024-12-05 03:08:29.916059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:59.093 [2024-12-05 03:08:29.916095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:59.093 [2024-12-05 03:08:29.916104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:59.093 [2024-12-05 03:08:29.916111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:59.093 [2024-12-05 03:08:29.916119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:59.093 [2024-12-05 03:08:29.916127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:59.093 [2024-12-05 03:08:29.916134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:59.093 [2024-12-05 03:08:29.916142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:59.093 [2024-12-05 03:08:29.916149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:59.093 [2024-12-05 03:08:29.916156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:59.093 [2024-12-05 03:08:29.916165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:59.093 [2024-12-05 03:08:29.916172] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:59.093 [2024-12-05 03:08:29.916181] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:59.093 [2024-12-05 03:08:29.916189] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:59.093 [2024-12-05 03:08:29.916196] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:59.093 [2024-12-05 03:08:29.916205] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:59.093 [2024-12-05 03:08:29.916215] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:59.093 [2024-12-05 03:08:29.916223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.093 [2024-12-05 03:08:29.916232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:59.093 [2024-12-05 03:08:29.916239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:23:59.093 [2024-12-05 03:08:29.916247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.355 [2024-12-05 03:08:29.947893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.355 [2024-12-05 03:08:29.947946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:59.355 [2024-12-05 03:08:29.947958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.601 ms 00:23:59.355 [2024-12-05 03:08:29.947971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.355 [2024-12-05 03:08:29.948064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.355 [2024-12-05 03:08:29.948095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:59.355 [2024-12-05 03:08:29.948104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:59.355 [2024-12-05 03:08:29.948113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.355 [2024-12-05 03:08:29.993291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.355 [2024-12-05 03:08:29.993347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:59.355 [2024-12-05 03:08:29.993360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.116 ms 00:23:59.355 [2024-12-05 03:08:29.993370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.355 [2024-12-05 03:08:29.993418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.355 [2024-12-05 03:08:29.993429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:59.355 [2024-12-05 03:08:29.993441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:59.355 [2024-12-05 03:08:29.993449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.356 [2024-12-05 03:08:29.994044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.356 [2024-12-05 03:08:29.994101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:59.356 [2024-12-05 03:08:29.994113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:23:59.356 [2024-12-05 03:08:29.994122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.356 [2024-12-05 03:08:29.994284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.356 [2024-12-05 03:08:29.994303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:59.356 [2024-12-05 03:08:29.994318] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:23:59.356 [2024-12-05 03:08:29.994327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.356 [2024-12-05 03:08:30.011828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.356 [2024-12-05 03:08:30.011890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:59.356 [2024-12-05 03:08:30.011903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.481 ms 00:23:59.356 [2024-12-05 03:08:30.011912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.356 [2024-12-05 03:08:30.026660] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:59.356 [2024-12-05 03:08:30.026715] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:59.356 [2024-12-05 03:08:30.026728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.356 [2024-12-05 03:08:30.026737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:59.356 [2024-12-05 03:08:30.026747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.699 ms 00:23:59.356 [2024-12-05 03:08:30.026755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.356 [2024-12-05 03:08:30.053535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.356 [2024-12-05 03:08:30.053592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:59.356 [2024-12-05 03:08:30.053606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.719 ms 00:23:59.356 [2024-12-05 03:08:30.053614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.356 [2024-12-05 03:08:30.066479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.356 [2024-12-05 03:08:30.066531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:59.356 [2024-12-05 03:08:30.066543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.789 ms 00:23:59.356 [2024-12-05 03:08:30.066551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.356 [2024-12-05 03:08:30.078927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.356 [2024-12-05 03:08:30.078976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:59.356 [2024-12-05 03:08:30.078989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.329 ms 00:23:59.356 [2024-12-05 03:08:30.078997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.356 [2024-12-05 03:08:30.079676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.356 [2024-12-05 03:08:30.079708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:59.356 [2024-12-05 03:08:30.079722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:23:59.356 [2024-12-05 03:08:30.079731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.356 [2024-12-05 03:08:30.147852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.356 [2024-12-05 03:08:30.147925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:59.356 [2024-12-05 03:08:30.147948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 68.101 ms 00:23:59.356 [2024-12-05 03:08:30.147958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.356 [2024-12-05 03:08:30.159133] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:59.356 [2024-12-05 03:08:30.162304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.356 [2024-12-05 03:08:30.162347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:59.356 [2024-12-05 03:08:30.162359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.285 ms 00:23:59.356 [2024-12-05 03:08:30.162367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.356 [2024-12-05 03:08:30.162452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.356 [2024-12-05 03:08:30.162463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:59.356 [2024-12-05 03:08:30.162476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:59.356 [2024-12-05 03:08:30.162485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.356 [2024-12-05 03:08:30.162559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.356 [2024-12-05 03:08:30.162570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:59.356 [2024-12-05 03:08:30.162580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:59.356 [2024-12-05 03:08:30.162588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.356 [2024-12-05 03:08:30.162607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.356 [2024-12-05 03:08:30.162616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:59.356 [2024-12-05 03:08:30.162625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:59.356 [2024-12-05 03:08:30.162633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.356 [2024-12-05 03:08:30.162674] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:59.356 [2024-12-05 03:08:30.162685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.356 [2024-12-05 03:08:30.162692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:59.356 [2024-12-05 03:08:30.162702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:59.356 [2024-12-05 03:08:30.162710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.356 [2024-12-05 03:08:30.188293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.356 [2024-12-05 03:08:30.188340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:59.356 [2024-12-05 03:08:30.188360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.565 ms 00:23:59.356 [2024-12-05 03:08:30.188369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.356 [2024-12-05 03:08:30.188459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.356 [2024-12-05 03:08:30.188470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:59.356 [2024-12-05 03:08:30.188480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:59.356 [2024-12-05 03:08:30.188488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:23:59.356 [2024-12-05 03:08:30.189754] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 303.702 ms, result 0 00:24:00.743  [2024-12-05T03:08:32.532Z] Copying: 18/1024 [MB] (18 MBps) [2024-12-05T03:08:33.476Z] Copying: 39/1024 [MB] (20 MBps) [2024-12-05T03:08:34.420Z] Copying: 58/1024 [MB] (19 MBps) [2024-12-05T03:08:35.365Z] Copying: 80/1024 [MB] (21 MBps) [2024-12-05T03:08:36.308Z] Copying: 98/1024 [MB] (18 MBps) [2024-12-05T03:08:37.254Z] Copying: 110/1024 [MB] (12 MBps) [2024-12-05T03:08:38.265Z] Copying: 126/1024 [MB] (15 MBps) [2024-12-05T03:08:39.210Z] Copying: 141/1024 [MB] (15 MBps) [2024-12-05T03:08:40.598Z] Copying: 155/1024 [MB] (13 MBps) [2024-12-05T03:08:41.541Z] Copying: 167/1024 [MB] (11 MBps) [2024-12-05T03:08:42.483Z] Copying: 183/1024 [MB] (15 MBps) [2024-12-05T03:08:43.427Z] Copying: 199/1024 [MB] (15 MBps) [2024-12-05T03:08:44.370Z] Copying: 224/1024 [MB] (25 MBps) [2024-12-05T03:08:45.314Z] Copying: 239/1024 [MB] (15 MBps) [2024-12-05T03:08:46.251Z] Copying: 255/1024 [MB] (16 MBps) [2024-12-05T03:08:47.631Z] Copying: 266/1024 [MB] (10 MBps) [2024-12-05T03:08:48.570Z] Copying: 276/1024 [MB] (10 MBps) [2024-12-05T03:08:49.511Z] Copying: 287/1024 [MB] (10 MBps) [2024-12-05T03:08:50.452Z] Copying: 297/1024 [MB] (10 MBps) [2024-12-05T03:08:51.393Z] Copying: 309/1024 [MB] (11 MBps) [2024-12-05T03:08:52.335Z] Copying: 330/1024 [MB] (21 MBps) [2024-12-05T03:08:53.280Z] Copying: 347/1024 [MB] (16 MBps) [2024-12-05T03:08:54.224Z] Copying: 360/1024 [MB] (13 MBps) [2024-12-05T03:08:55.610Z] Copying: 372/1024 [MB] (12 MBps) [2024-12-05T03:08:56.551Z] Copying: 389/1024 [MB] (16 MBps) [2024-12-05T03:08:57.496Z] Copying: 421/1024 [MB] (32 MBps) [2024-12-05T03:08:58.440Z] Copying: 454/1024 [MB] (32 MBps) [2024-12-05T03:08:59.383Z] Copying: 480/1024 [MB] (25 MBps) [2024-12-05T03:09:00.327Z] Copying: 498/1024 [MB] (18 MBps) [2024-12-05T03:09:01.272Z] Copying: 518/1024 [MB] (19 MBps) [2024-12-05T03:09:02.215Z] Copying: 536/1024 [MB] (17 MBps) [2024-12-05T03:09:03.601Z] Copying: 568/1024 [MB] (32 MBps) [2024-12-05T03:09:04.545Z] Copying: 580/1024 [MB] (11 MBps) [2024-12-05T03:09:05.487Z] Copying: 596/1024 [MB] (16 MBps) [2024-12-05T03:09:06.427Z] Copying: 613/1024 [MB] (17 MBps) [2024-12-05T03:09:07.421Z] Copying: 628/1024 [MB] (14 MBps) [2024-12-05T03:09:08.366Z] Copying: 641/1024 [MB] (13 MBps) [2024-12-05T03:09:09.310Z] Copying: 653/1024 [MB] (12 MBps) [2024-12-05T03:09:10.255Z] Copying: 674/1024 [MB] (20 MBps) [2024-12-05T03:09:11.644Z] Copying: 688/1024 [MB] (14 MBps) [2024-12-05T03:09:12.216Z] Copying: 708/1024 [MB] (19 MBps) [2024-12-05T03:09:13.643Z] Copying: 720/1024 [MB] (12 MBps) [2024-12-05T03:09:14.214Z] Copying: 748/1024 [MB] (28 MBps) [2024-12-05T03:09:15.596Z] Copying: 778/1024 [MB] (29 MBps) [2024-12-05T03:09:16.540Z] Copying: 794/1024 [MB] (16 MBps) [2024-12-05T03:09:17.485Z] Copying: 812/1024 [MB] (17 MBps) [2024-12-05T03:09:18.429Z] Copying: 825/1024 [MB] (13 MBps) [2024-12-05T03:09:19.372Z] Copying: 840/1024 [MB] (14 MBps) [2024-12-05T03:09:20.312Z] Copying: 860/1024 [MB] (19 MBps) [2024-12-05T03:09:21.254Z] Copying: 889/1024 [MB] (28 MBps) [2024-12-05T03:09:22.638Z] Copying: 921/1024 [MB] (32 MBps) [2024-12-05T03:09:23.210Z] Copying: 951/1024 [MB] (29 MBps) [2024-12-05T03:09:24.595Z] Copying: 969/1024 [MB] (18 MBps) [2024-12-05T03:09:25.537Z] Copying: 984/1024 [MB] (15 MBps) [2024-12-05T03:09:26.480Z] Copying: 1017/1024 [MB] (32 MBps) [2024-12-05T03:09:26.480Z] Copying: 1024/1024 [MB] (average 18 
MBps)[2024-12-05 03:09:26.133323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.636 [2024-12-05 03:09:26.133375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:55.636 [2024-12-05 03:09:26.133395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:55.636 [2024-12-05 03:09:26.133402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.636 [2024-12-05 03:09:26.134971] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:55.636 [2024-12-05 03:09:26.139447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.636 [2024-12-05 03:09:26.139476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:55.636 [2024-12-05 03:09:26.139486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.436 ms 00:24:55.636 [2024-12-05 03:09:26.139492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.636 [2024-12-05 03:09:26.148978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.636 [2024-12-05 03:09:26.149007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:55.636 [2024-12-05 03:09:26.149015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.750 ms 00:24:55.636 [2024-12-05 03:09:26.149026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.636 [2024-12-05 03:09:26.167471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.636 [2024-12-05 03:09:26.167501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:55.636 [2024-12-05 03:09:26.167509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.433 ms 00:24:55.636 [2024-12-05 03:09:26.167515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.636 [2024-12-05 03:09:26.172283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.636 [2024-12-05 03:09:26.172307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:55.636 [2024-12-05 03:09:26.172315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.747 ms 00:24:55.636 [2024-12-05 03:09:26.172326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.636 [2024-12-05 03:09:26.190390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.636 [2024-12-05 03:09:26.190428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:55.636 [2024-12-05 03:09:26.190437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.034 ms 00:24:55.636 [2024-12-05 03:09:26.190442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.636 [2024-12-05 03:09:26.201727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.636 [2024-12-05 03:09:26.201755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:55.636 [2024-12-05 03:09:26.201764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.258 ms 00:24:55.636 [2024-12-05 03:09:26.201770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.636 [2024-12-05 03:09:26.324935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.636 [2024-12-05 03:09:26.324966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:55.636 [2024-12-05 
03:09:26.324975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 123.136 ms 00:24:55.636 [2024-12-05 03:09:26.324981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.636 [2024-12-05 03:09:26.343408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.636 [2024-12-05 03:09:26.343435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:55.636 [2024-12-05 03:09:26.343444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.415 ms 00:24:55.636 [2024-12-05 03:09:26.343449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.636 [2024-12-05 03:09:26.360667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.636 [2024-12-05 03:09:26.360693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:55.636 [2024-12-05 03:09:26.360701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.191 ms 00:24:55.636 [2024-12-05 03:09:26.360707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.636 [2024-12-05 03:09:26.377728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.636 [2024-12-05 03:09:26.377754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:55.636 [2024-12-05 03:09:26.377761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.997 ms 00:24:55.636 [2024-12-05 03:09:26.377767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.636 [2024-12-05 03:09:26.394592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.636 [2024-12-05 03:09:26.394618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:55.636 [2024-12-05 03:09:26.394625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.783 ms 00:24:55.636 [2024-12-05 03:09:26.394631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.636 [2024-12-05 03:09:26.394655] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:55.636 [2024-12-05 03:09:26.394665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 110848 / 261120 wr_cnt: 1 state: open 00:24:55.636 [2024-12-05 03:09:26.394673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:55.636 [2024-12-05 03:09:26.394679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:55.636 [2024-12-05 03:09:26.394685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:55.636 [2024-12-05 03:09:26.394691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:55.636 [2024-12-05 03:09:26.394697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:55.636 [2024-12-05 03:09:26.394703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:55.636 [2024-12-05 03:09:26.394709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:55.636 [2024-12-05 03:09:26.394715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:55.636 [2024-12-05 03:09:26.394721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 
00:24:55.636 [2024-12-05 03:09:26.394727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:55.636 [2024-12-05 03:09:26.394732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 
wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.394996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395165] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:55.637 [2024-12-05 03:09:26.395263] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:55.638 [2024-12-05 03:09:26.395269] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b76b6fa1-c46b-47b1-9df2-5dec0f5783c7 00:24:55.638 [2024-12-05 03:09:26.395275] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 110848 00:24:55.638 [2024-12-05 03:09:26.395280] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 111808 00:24:55.638 [2024-12-05 03:09:26.395286] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 110848 00:24:55.638 [2024-12-05 03:09:26.395292] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0087 00:24:55.638 [2024-12-05 03:09:26.395304] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:55.638 [2024-12-05 03:09:26.395310] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:55.638 [2024-12-05 03:09:26.395315] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:55.638 [2024-12-05 03:09:26.395321] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:55.638 [2024-12-05 03:09:26.395326] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:55.638 [2024-12-05 03:09:26.395331] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.638 [2024-12-05 03:09:26.395337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:55.638 [2024-12-05 03:09:26.395344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.677 ms 00:24:55.638 [2024-12-05 03:09:26.395349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.638 [2024-12-05 03:09:26.404657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.638 [2024-12-05 03:09:26.404683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:55.638 [2024-12-05 03:09:26.404693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.296 ms 00:24:55.638 [2024-12-05 03:09:26.404699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.638 [2024-12-05 03:09:26.404981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.638 [2024-12-05 03:09:26.404993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:55.638 [2024-12-05 03:09:26.405000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:24:55.638 [2024-12-05 03:09:26.405006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.638 [2024-12-05 03:09:26.430780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.638 [2024-12-05 03:09:26.430808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:55.638 [2024-12-05 03:09:26.430816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.638 [2024-12-05 03:09:26.430822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.638 [2024-12-05 03:09:26.430860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.638 [2024-12-05 03:09:26.430866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:55.638 [2024-12-05 03:09:26.430872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.638 [2024-12-05 03:09:26.430878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.638 [2024-12-05 03:09:26.430927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.638 [2024-12-05 03:09:26.430937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:55.638 [2024-12-05 03:09:26.430943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.638 [2024-12-05 03:09:26.430949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.638 [2024-12-05 03:09:26.430962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.638 [2024-12-05 03:09:26.430968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:55.638 [2024-12-05 03:09:26.430973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.638 [2024-12-05 03:09:26.430979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.897 [2024-12-05 03:09:26.490861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.897 [2024-12-05 03:09:26.490903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:55.897 [2024-12-05 03:09:26.490912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.897 [2024-12-05 03:09:26.490919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:24:55.897 [2024-12-05 03:09:26.539379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.897 [2024-12-05 03:09:26.539412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:55.897 [2024-12-05 03:09:26.539420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.897 [2024-12-05 03:09:26.539427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.897 [2024-12-05 03:09:26.539467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.897 [2024-12-05 03:09:26.539476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:55.897 [2024-12-05 03:09:26.539482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.897 [2024-12-05 03:09:26.539491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.897 [2024-12-05 03:09:26.539529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.897 [2024-12-05 03:09:26.539536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:55.897 [2024-12-05 03:09:26.539542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.897 [2024-12-05 03:09:26.539548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.897 [2024-12-05 03:09:26.539615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.897 [2024-12-05 03:09:26.539623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:55.897 [2024-12-05 03:09:26.539629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.897 [2024-12-05 03:09:26.539637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.897 [2024-12-05 03:09:26.539659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.897 [2024-12-05 03:09:26.539665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:55.897 [2024-12-05 03:09:26.539671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.897 [2024-12-05 03:09:26.539677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.897 [2024-12-05 03:09:26.539703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.897 [2024-12-05 03:09:26.539710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:55.897 [2024-12-05 03:09:26.539715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.897 [2024-12-05 03:09:26.539721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.897 [2024-12-05 03:09:26.539754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.897 [2024-12-05 03:09:26.539761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:55.897 [2024-12-05 03:09:26.539767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.897 [2024-12-05 03:09:26.539773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.897 [2024-12-05 03:09:26.539863] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 407.893 ms, result 0 00:24:56.840 00:24:56.840 00:24:57.100 03:09:27 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:24:57.100 [2024-12-05 03:09:27.755264] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:24:57.100 [2024-12-05 03:09:27.755389] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79677 ] 00:24:57.100 [2024-12-05 03:09:27.911095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.359 [2024-12-05 03:09:27.996318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.621 [2024-12-05 03:09:28.206006] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:57.621 [2024-12-05 03:09:28.206057] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:57.621 [2024-12-05 03:09:28.357201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.621 [2024-12-05 03:09:28.357238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:57.621 [2024-12-05 03:09:28.357249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:57.621 [2024-12-05 03:09:28.357255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.621 [2024-12-05 03:09:28.357288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.621 [2024-12-05 03:09:28.357297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:57.621 [2024-12-05 03:09:28.357304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:24:57.621 [2024-12-05 03:09:28.357310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.621 [2024-12-05 03:09:28.357322] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:57.621 [2024-12-05 03:09:28.357927] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:57.621 [2024-12-05 03:09:28.357949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.621 [2024-12-05 03:09:28.357956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:57.621 [2024-12-05 03:09:28.357963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.631 ms 00:24:57.621 [2024-12-05 03:09:28.357968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.621 [2024-12-05 03:09:28.358918] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:57.621 [2024-12-05 03:09:28.368518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.621 [2024-12-05 03:09:28.368546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:57.621 [2024-12-05 03:09:28.368555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.601 ms 00:24:57.621 [2024-12-05 03:09:28.368562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.621 [2024-12-05 03:09:28.368607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.621 [2024-12-05 03:09:28.368614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:57.621 [2024-12-05 03:09:28.368621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 
00:24:57.621 [2024-12-05 03:09:28.368626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.621 [2024-12-05 03:09:28.373021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.621 [2024-12-05 03:09:28.373046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:57.621 [2024-12-05 03:09:28.373053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.349 ms 00:24:57.621 [2024-12-05 03:09:28.373063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.621 [2024-12-05 03:09:28.373127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.621 [2024-12-05 03:09:28.373135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:57.621 [2024-12-05 03:09:28.373142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:57.621 [2024-12-05 03:09:28.373147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.621 [2024-12-05 03:09:28.373179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.621 [2024-12-05 03:09:28.373186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:57.621 [2024-12-05 03:09:28.373192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:57.621 [2024-12-05 03:09:28.373197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.621 [2024-12-05 03:09:28.373213] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:57.621 [2024-12-05 03:09:28.375822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.621 [2024-12-05 03:09:28.375845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:57.621 [2024-12-05 03:09:28.375854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.612 ms 00:24:57.621 [2024-12-05 03:09:28.375859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.621 [2024-12-05 03:09:28.375887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.621 [2024-12-05 03:09:28.375894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:57.621 [2024-12-05 03:09:28.375900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:57.621 [2024-12-05 03:09:28.375906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.621 [2024-12-05 03:09:28.375919] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:57.621 [2024-12-05 03:09:28.375934] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:57.621 [2024-12-05 03:09:28.375960] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:57.621 [2024-12-05 03:09:28.375973] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:57.621 [2024-12-05 03:09:28.376053] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:57.621 [2024-12-05 03:09:28.376061] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:57.621 [2024-12-05 03:09:28.376088] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 
0x190 bytes 00:24:57.621 [2024-12-05 03:09:28.376096] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:57.621 [2024-12-05 03:09:28.376103] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:57.621 [2024-12-05 03:09:28.376109] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:57.621 [2024-12-05 03:09:28.376115] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:57.621 [2024-12-05 03:09:28.376122] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:57.621 [2024-12-05 03:09:28.376128] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:57.621 [2024-12-05 03:09:28.376134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.621 [2024-12-05 03:09:28.376140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:57.621 [2024-12-05 03:09:28.376146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:24:57.621 [2024-12-05 03:09:28.376151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.621 [2024-12-05 03:09:28.376214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.621 [2024-12-05 03:09:28.376220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:57.621 [2024-12-05 03:09:28.376226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:57.621 [2024-12-05 03:09:28.376231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.621 [2024-12-05 03:09:28.376306] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:57.621 [2024-12-05 03:09:28.376314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:57.621 [2024-12-05 03:09:28.376320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:57.621 [2024-12-05 03:09:28.376327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:57.621 [2024-12-05 03:09:28.376333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:57.621 [2024-12-05 03:09:28.376338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:57.621 [2024-12-05 03:09:28.376343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:57.621 [2024-12-05 03:09:28.376349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:57.621 [2024-12-05 03:09:28.376355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:57.621 [2024-12-05 03:09:28.376360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:57.621 [2024-12-05 03:09:28.376365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:57.621 [2024-12-05 03:09:28.376372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:57.621 [2024-12-05 03:09:28.376378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:57.621 [2024-12-05 03:09:28.376386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:57.621 [2024-12-05 03:09:28.376392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:57.622 [2024-12-05 03:09:28.376397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:57.622 [2024-12-05 03:09:28.376402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region nvc_md_mirror 00:24:57.622 [2024-12-05 03:09:28.376407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:57.622 [2024-12-05 03:09:28.376412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:57.622 [2024-12-05 03:09:28.376417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:57.622 [2024-12-05 03:09:28.376422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:57.622 [2024-12-05 03:09:28.376427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:57.622 [2024-12-05 03:09:28.376432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:57.622 [2024-12-05 03:09:28.376438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:57.622 [2024-12-05 03:09:28.376442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:57.622 [2024-12-05 03:09:28.376448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:57.622 [2024-12-05 03:09:28.376453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:57.622 [2024-12-05 03:09:28.376458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:57.622 [2024-12-05 03:09:28.376463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:57.622 [2024-12-05 03:09:28.376468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:57.622 [2024-12-05 03:09:28.376473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:57.622 [2024-12-05 03:09:28.376477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:57.622 [2024-12-05 03:09:28.376482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:57.622 [2024-12-05 03:09:28.376487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:57.622 [2024-12-05 03:09:28.376492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:57.622 [2024-12-05 03:09:28.376497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:57.622 [2024-12-05 03:09:28.376502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:57.622 [2024-12-05 03:09:28.376507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:57.622 [2024-12-05 03:09:28.376512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:57.622 [2024-12-05 03:09:28.376517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:57.622 [2024-12-05 03:09:28.376522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:57.622 [2024-12-05 03:09:28.376527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:57.622 [2024-12-05 03:09:28.376532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:57.622 [2024-12-05 03:09:28.376538] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:57.622 [2024-12-05 03:09:28.376544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:57.622 [2024-12-05 03:09:28.376549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:57.622 [2024-12-05 03:09:28.376554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:57.622 [2024-12-05 03:09:28.376560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:57.622 [2024-12-05 03:09:28.376566] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:57.622 [2024-12-05 03:09:28.376571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:57.622 [2024-12-05 03:09:28.376576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:57.622 [2024-12-05 03:09:28.376581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:57.622 [2024-12-05 03:09:28.376586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:57.622 [2024-12-05 03:09:28.376592] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:57.622 [2024-12-05 03:09:28.376599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:57.622 [2024-12-05 03:09:28.376607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:57.622 [2024-12-05 03:09:28.376612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:57.622 [2024-12-05 03:09:28.376617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:57.622 [2024-12-05 03:09:28.376623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:57.622 [2024-12-05 03:09:28.376628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:57.622 [2024-12-05 03:09:28.376634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:57.622 [2024-12-05 03:09:28.376639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:57.622 [2024-12-05 03:09:28.376644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:57.622 [2024-12-05 03:09:28.376649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:57.622 [2024-12-05 03:09:28.376654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:57.622 [2024-12-05 03:09:28.376660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:57.622 [2024-12-05 03:09:28.376665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:57.622 [2024-12-05 03:09:28.376670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:57.622 [2024-12-05 03:09:28.376675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:57.622 [2024-12-05 03:09:28.376680] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:57.622 [2024-12-05 03:09:28.376687] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:57.622 [2024-12-05 03:09:28.376693] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:57.622 [2024-12-05 03:09:28.376699] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:57.622 [2024-12-05 03:09:28.376704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:57.622 [2024-12-05 03:09:28.376709] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:57.622 [2024-12-05 03:09:28.376715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.622 [2024-12-05 03:09:28.376721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:57.622 [2024-12-05 03:09:28.376727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:24:57.622 [2024-12-05 03:09:28.376732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.622 [2024-12-05 03:09:28.397811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.622 [2024-12-05 03:09:28.397920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:57.622 [2024-12-05 03:09:28.397979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.046 ms 00:24:57.622 [2024-12-05 03:09:28.398007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.622 [2024-12-05 03:09:28.398365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.622 [2024-12-05 03:09:28.398456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:57.622 [2024-12-05 03:09:28.398540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:24:57.622 [2024-12-05 03:09:28.398567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.622 [2024-12-05 03:09:28.434972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.622 [2024-12-05 03:09:28.435080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:57.622 [2024-12-05 03:09:28.435132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.277 ms 00:24:57.622 [2024-12-05 03:09:28.435151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.622 [2024-12-05 03:09:28.435208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.622 [2024-12-05 03:09:28.435228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:57.622 [2024-12-05 03:09:28.435253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:57.622 [2024-12-05 03:09:28.435276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.622 [2024-12-05 03:09:28.435623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.622 [2024-12-05 03:09:28.435706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:57.622 [2024-12-05 03:09:28.435758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:24:57.622 [2024-12-05 03:09:28.435837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.622 [2024-12-05 03:09:28.435989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:57.622 [2024-12-05 03:09:28.436028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:57.622 [2024-12-05 03:09:28.436104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:24:57.622 [2024-12-05 03:09:28.436131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.622 [2024-12-05 03:09:28.446646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.622 [2024-12-05 03:09:28.446736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:57.622 [2024-12-05 03:09:28.446750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.489 ms 00:24:57.622 [2024-12-05 03:09:28.446756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.622 [2024-12-05 03:09:28.456327] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:57.622 [2024-12-05 03:09:28.456356] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:57.622 [2024-12-05 03:09:28.456365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.622 [2024-12-05 03:09:28.456371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:57.622 [2024-12-05 03:09:28.456378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.540 ms 00:24:57.622 [2024-12-05 03:09:28.456383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.883 [2024-12-05 03:09:28.474891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.883 [2024-12-05 03:09:28.474920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:57.883 [2024-12-05 03:09:28.474929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.478 ms 00:24:57.883 [2024-12-05 03:09:28.474936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.883 [2024-12-05 03:09:28.483780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.883 [2024-12-05 03:09:28.483806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:57.883 [2024-12-05 03:09:28.483813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.815 ms 00:24:57.883 [2024-12-05 03:09:28.483819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.883 [2024-12-05 03:09:28.492651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.883 [2024-12-05 03:09:28.492677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:57.883 [2024-12-05 03:09:28.492684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.807 ms 00:24:57.883 [2024-12-05 03:09:28.492690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.883 [2024-12-05 03:09:28.493161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.883 [2024-12-05 03:09:28.493176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:57.883 [2024-12-05 03:09:28.493185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:24:57.883 [2024-12-05 03:09:28.493192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.883 [2024-12-05 03:09:28.537136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.883 [2024-12-05 03:09:28.537268] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:57.883 [2024-12-05 03:09:28.537286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.931 ms 00:24:57.883 [2024-12-05 03:09:28.537293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.883 [2024-12-05 03:09:28.545044] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:57.883 [2024-12-05 03:09:28.546858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.883 [2024-12-05 03:09:28.546883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:57.883 [2024-12-05 03:09:28.546891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.537 ms 00:24:57.883 [2024-12-05 03:09:28.546898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.883 [2024-12-05 03:09:28.546952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.883 [2024-12-05 03:09:28.546961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:57.883 [2024-12-05 03:09:28.546971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:57.883 [2024-12-05 03:09:28.546978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.883 [2024-12-05 03:09:28.548030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.883 [2024-12-05 03:09:28.548057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:57.883 [2024-12-05 03:09:28.548066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.030 ms 00:24:57.883 [2024-12-05 03:09:28.548084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.883 [2024-12-05 03:09:28.548102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.883 [2024-12-05 03:09:28.548109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:57.883 [2024-12-05 03:09:28.548115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:57.883 [2024-12-05 03:09:28.548121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.883 [2024-12-05 03:09:28.548161] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:57.883 [2024-12-05 03:09:28.548170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.883 [2024-12-05 03:09:28.548176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:57.883 [2024-12-05 03:09:28.548182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:57.883 [2024-12-05 03:09:28.548187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.883 [2024-12-05 03:09:28.565691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.883 [2024-12-05 03:09:28.565716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:57.883 [2024-12-05 03:09:28.565728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.490 ms 00:24:57.883 [2024-12-05 03:09:28.565734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.883 [2024-12-05 03:09:28.565786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.883 [2024-12-05 03:09:28.565793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:57.883 [2024-12-05 03:09:28.565799] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:24:57.883 [2024-12-05 03:09:28.565805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.883 [2024-12-05 03:09:28.566845] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 209.312 ms, result 0 00:24:59.268  [2024-12-05T03:09:31.061Z] Copying: 10/1024 [MB] (10 MBps) [2024-12-05T03:09:32.005Z] Copying: 23/1024 [MB] (12 MBps) [2024-12-05T03:09:32.948Z] Copying: 38/1024 [MB] (15 MBps) [2024-12-05T03:09:33.892Z] Copying: 51/1024 [MB] (12 MBps) [2024-12-05T03:09:34.835Z] Copying: 68/1024 [MB] (16 MBps) [2024-12-05T03:09:35.827Z] Copying: 85/1024 [MB] (17 MBps) [2024-12-05T03:09:36.770Z] Copying: 109/1024 [MB] (23 MBps) [2024-12-05T03:09:37.712Z] Copying: 126/1024 [MB] (17 MBps) [2024-12-05T03:09:39.096Z] Copying: 149/1024 [MB] (23 MBps) [2024-12-05T03:09:40.040Z] Copying: 169/1024 [MB] (19 MBps) [2024-12-05T03:09:40.985Z] Copying: 193/1024 [MB] (24 MBps) [2024-12-05T03:09:41.930Z] Copying: 207/1024 [MB] (14 MBps) [2024-12-05T03:09:42.874Z] Copying: 225/1024 [MB] (17 MBps) [2024-12-05T03:09:43.817Z] Copying: 243/1024 [MB] (18 MBps) [2024-12-05T03:09:44.760Z] Copying: 260/1024 [MB] (17 MBps) [2024-12-05T03:09:45.740Z] Copying: 273/1024 [MB] (13 MBps) [2024-12-05T03:09:47.120Z] Copying: 286/1024 [MB] (12 MBps) [2024-12-05T03:09:48.059Z] Copying: 303/1024 [MB] (17 MBps) [2024-12-05T03:09:49.003Z] Copying: 320/1024 [MB] (16 MBps) [2024-12-05T03:09:49.950Z] Copying: 331/1024 [MB] (10 MBps) [2024-12-05T03:09:50.894Z] Copying: 353/1024 [MB] (22 MBps) [2024-12-05T03:09:51.838Z] Copying: 371/1024 [MB] (18 MBps) [2024-12-05T03:09:52.781Z] Copying: 381/1024 [MB] (10 MBps) [2024-12-05T03:09:53.724Z] Copying: 393/1024 [MB] (11 MBps) [2024-12-05T03:09:55.110Z] Copying: 407/1024 [MB] (13 MBps) [2024-12-05T03:09:56.054Z] Copying: 418/1024 [MB] (11 MBps) [2024-12-05T03:09:56.997Z] Copying: 429/1024 [MB] (10 MBps) [2024-12-05T03:09:57.941Z] Copying: 449/1024 [MB] (19 MBps) [2024-12-05T03:09:58.886Z] Copying: 468/1024 [MB] (18 MBps) [2024-12-05T03:09:59.831Z] Copying: 485/1024 [MB] (17 MBps) [2024-12-05T03:10:00.777Z] Copying: 501/1024 [MB] (16 MBps) [2024-12-05T03:10:01.720Z] Copying: 517/1024 [MB] (16 MBps) [2024-12-05T03:10:03.106Z] Copying: 531/1024 [MB] (13 MBps) [2024-12-05T03:10:04.046Z] Copying: 545/1024 [MB] (14 MBps) [2024-12-05T03:10:05.068Z] Copying: 560/1024 [MB] (15 MBps) [2024-12-05T03:10:06.009Z] Copying: 575/1024 [MB] (14 MBps) [2024-12-05T03:10:06.951Z] Copying: 587/1024 [MB] (11 MBps) [2024-12-05T03:10:07.890Z] Copying: 600/1024 [MB] (12 MBps) [2024-12-05T03:10:08.831Z] Copying: 611/1024 [MB] (11 MBps) [2024-12-05T03:10:09.771Z] Copying: 623/1024 [MB] (12 MBps) [2024-12-05T03:10:10.711Z] Copying: 636/1024 [MB] (13 MBps) [2024-12-05T03:10:12.093Z] Copying: 655/1024 [MB] (19 MBps) [2024-12-05T03:10:13.032Z] Copying: 667/1024 [MB] (11 MBps) [2024-12-05T03:10:13.972Z] Copying: 678/1024 [MB] (11 MBps) [2024-12-05T03:10:14.913Z] Copying: 690/1024 [MB] (11 MBps) [2024-12-05T03:10:15.855Z] Copying: 703/1024 [MB] (13 MBps) [2024-12-05T03:10:16.798Z] Copying: 723/1024 [MB] (19 MBps) [2024-12-05T03:10:17.742Z] Copying: 734/1024 [MB] (11 MBps) [2024-12-05T03:10:19.130Z] Copying: 746/1024 [MB] (11 MBps) [2024-12-05T03:10:20.072Z] Copying: 762/1024 [MB] (16 MBps) [2024-12-05T03:10:21.014Z] Copying: 774/1024 [MB] (12 MBps) [2024-12-05T03:10:21.956Z] Copying: 786/1024 [MB] (11 MBps) [2024-12-05T03:10:22.898Z] Copying: 802/1024 [MB] (16 MBps) 
[2024-12-05T03:10:23.843Z] Copying: 812/1024 [MB] (10 MBps) [2024-12-05T03:10:24.787Z] Copying: 825/1024 [MB] (13 MBps) [2024-12-05T03:10:25.816Z] Copying: 843/1024 [MB] (17 MBps) [2024-12-05T03:10:26.803Z] Copying: 859/1024 [MB] (16 MBps) [2024-12-05T03:10:27.751Z] Copying: 873/1024 [MB] (14 MBps) [2024-12-05T03:10:29.141Z] Copying: 885/1024 [MB] (11 MBps) [2024-12-05T03:10:29.714Z] Copying: 897/1024 [MB] (11 MBps) [2024-12-05T03:10:31.101Z] Copying: 909/1024 [MB] (11 MBps) [2024-12-05T03:10:32.045Z] Copying: 920/1024 [MB] (10 MBps) [2024-12-05T03:10:32.991Z] Copying: 932/1024 [MB] (12 MBps) [2024-12-05T03:10:33.933Z] Copying: 948/1024 [MB] (15 MBps) [2024-12-05T03:10:34.875Z] Copying: 965/1024 [MB] (16 MBps) [2024-12-05T03:10:35.816Z] Copying: 990/1024 [MB] (24 MBps) [2024-12-05T03:10:36.078Z] Copying: 1020/1024 [MB] (30 MBps) [2024-12-05T03:10:36.078Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-05 03:10:35.918562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.234 [2024-12-05 03:10:35.918827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:05.234 [2024-12-05 03:10:35.918913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:05.234 [2024-12-05 03:10:35.919018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.234 [2024-12-05 03:10:35.919083] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:05.234 [2024-12-05 03:10:35.922556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.234 [2024-12-05 03:10:35.922679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:05.234 [2024-12-05 03:10:35.922742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.378 ms 00:26:05.234 [2024-12-05 03:10:35.922767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.234 [2024-12-05 03:10:35.923026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.234 [2024-12-05 03:10:35.923540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:05.234 [2024-12-05 03:10:35.923647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:26:05.234 [2024-12-05 03:10:35.924323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.234 [2024-12-05 03:10:35.929832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.234 [2024-12-05 03:10:35.929942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:05.234 [2024-12-05 03:10:35.929959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.474 ms 00:26:05.234 [2024-12-05 03:10:35.929968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.234 [2024-12-05 03:10:35.936643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.234 [2024-12-05 03:10:35.936731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:05.234 [2024-12-05 03:10:35.936781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.641 ms 00:26:05.234 [2024-12-05 03:10:35.936829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.234 [2024-12-05 03:10:35.961550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.234 [2024-12-05 03:10:35.961659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:05.234 [2024-12-05 
03:10:35.961712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.666 ms 00:26:05.234 [2024-12-05 03:10:35.961734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.234 [2024-12-05 03:10:35.976323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.234 [2024-12-05 03:10:35.976426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:05.234 [2024-12-05 03:10:35.976475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.531 ms 00:26:05.234 [2024-12-05 03:10:35.976497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.496 [2024-12-05 03:10:36.147174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.496 [2024-12-05 03:10:36.147272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:05.496 [2024-12-05 03:10:36.147321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 170.633 ms 00:26:05.496 [2024-12-05 03:10:36.147342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.496 [2024-12-05 03:10:36.170505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.496 [2024-12-05 03:10:36.170607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:05.496 [2024-12-05 03:10:36.170656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.134 ms 00:26:05.496 [2024-12-05 03:10:36.170677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.496 [2024-12-05 03:10:36.193251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.496 [2024-12-05 03:10:36.193352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:05.496 [2024-12-05 03:10:36.193401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.538 ms 00:26:05.496 [2024-12-05 03:10:36.193423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.496 [2024-12-05 03:10:36.215658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.496 [2024-12-05 03:10:36.215762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:05.496 [2024-12-05 03:10:36.215811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.199 ms 00:26:05.496 [2024-12-05 03:10:36.215832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.496 [2024-12-05 03:10:36.238604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.496 [2024-12-05 03:10:36.238706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:05.496 [2024-12-05 03:10:36.238755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.712 ms 00:26:05.496 [2024-12-05 03:10:36.238776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.496 [2024-12-05 03:10:36.238886] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:05.496 [2024-12-05 03:10:36.238922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131584 / 261120 wr_cnt: 1 state: open 00:26:05.496 [2024-12-05 03:10:36.238955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:05.496 [2024-12-05 03:10:36.239025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:05.496 [2024-12-05 03:10:36.239056] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:05.496 [2024-12-05 03:10:36.239096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:05.496 [2024-12-05 03:10:36.239449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:05.496 [2024-12-05 03:10:36.239549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:05.496 [2024-12-05 03:10:36.239585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:05.496 [2024-12-05 03:10:36.239640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:05.496 [2024-12-05 03:10:36.239670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:05.496 [2024-12-05 03:10:36.239700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:05.496 [2024-12-05 03:10:36.239759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:05.496 [2024-12-05 03:10:36.239790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:05.496 [2024-12-05 03:10:36.239818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:05.496 [2024-12-05 03:10:36.239866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:05.496 [2024-12-05 03:10:36.239899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:05.496 [2024-12-05 03:10:36.239928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:05.496 [2024-12-05 03:10:36.239957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:05.496 [2024-12-05 03:10:36.240007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:05.496 [2024-12-05 03:10:36.240036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:05.496 [2024-12-05 03:10:36.240109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:05.496 [2024-12-05 03:10:36.240140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:05.496 [2024-12-05 03:10:36.240190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:05.496 [2024-12-05 03:10:36.240232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:05.496 [2024-12-05 03:10:36.240404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:05.496 [2024-12-05 03:10:36.240435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:05.496 [2024-12-05 03:10:36.240463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:05.496 [2024-12-05 03:10:36.240493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:05.496 [2024-12-05 03:10:36.240520] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.240590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.240621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.240648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.240677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.240705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.240759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.240789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.240818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.240847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.240876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.240935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.240966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 
03:10:36.241711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 
00:26:05.497 [2024-12-05 03:10:36.241906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.241996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.242004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.242011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.242018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.242025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.242032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.242040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.242048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.242055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.242062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:05.497 [2024-12-05 03:10:36.242090] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:05.497 [2024-12-05 03:10:36.242099] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b76b6fa1-c46b-47b1-9df2-5dec0f5783c7 00:26:05.497 [2024-12-05 03:10:36.242107] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131584 00:26:05.497 [2024-12-05 
03:10:36.242115] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 21696 00:26:05.497 [2024-12-05 03:10:36.242123] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 20736 00:26:05.497 [2024-12-05 03:10:36.242131] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0463 00:26:05.497 [2024-12-05 03:10:36.242146] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:05.497 [2024-12-05 03:10:36.242161] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:05.497 [2024-12-05 03:10:36.242168] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:05.497 [2024-12-05 03:10:36.242174] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:05.497 [2024-12-05 03:10:36.242181] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:05.497 [2024-12-05 03:10:36.242189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.497 [2024-12-05 03:10:36.242198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:05.497 [2024-12-05 03:10:36.242207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.305 ms 00:26:05.497 [2024-12-05 03:10:36.242214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.497 [2024-12-05 03:10:36.255525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.497 [2024-12-05 03:10:36.255621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:05.497 [2024-12-05 03:10:36.255676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.284 ms 00:26:05.497 [2024-12-05 03:10:36.255698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.497 [2024-12-05 03:10:36.256157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.497 [2024-12-05 03:10:36.256237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:05.497 [2024-12-05 03:10:36.256287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.353 ms 00:26:05.498 [2024-12-05 03:10:36.256311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.498 [2024-12-05 03:10:36.291384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:05.498 [2024-12-05 03:10:36.291490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:05.498 [2024-12-05 03:10:36.291539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:05.498 [2024-12-05 03:10:36.291562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.498 [2024-12-05 03:10:36.291630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:05.498 [2024-12-05 03:10:36.291654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:05.498 [2024-12-05 03:10:36.291676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:05.498 [2024-12-05 03:10:36.291697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.498 [2024-12-05 03:10:36.291761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:05.498 [2024-12-05 03:10:36.291787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:05.498 [2024-12-05 03:10:36.291813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:05.498 [2024-12-05 03:10:36.291876] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.498 [2024-12-05 03:10:36.291907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:05.498 [2024-12-05 03:10:36.291928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:05.498 [2024-12-05 03:10:36.291948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:05.498 [2024-12-05 03:10:36.291967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.758 [2024-12-05 03:10:36.373670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:05.758 [2024-12-05 03:10:36.373816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:05.758 [2024-12-05 03:10:36.373864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:05.758 [2024-12-05 03:10:36.373886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.758 [2024-12-05 03:10:36.440790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:05.758 [2024-12-05 03:10:36.440925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:05.758 [2024-12-05 03:10:36.440980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:05.758 [2024-12-05 03:10:36.441004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.758 [2024-12-05 03:10:36.441114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:05.758 [2024-12-05 03:10:36.441143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:05.758 [2024-12-05 03:10:36.441163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:05.758 [2024-12-05 03:10:36.441185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.758 [2024-12-05 03:10:36.441232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:05.758 [2024-12-05 03:10:36.441340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:05.758 [2024-12-05 03:10:36.441361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:05.758 [2024-12-05 03:10:36.441379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.758 [2024-12-05 03:10:36.441487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:05.758 [2024-12-05 03:10:36.441539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:05.758 [2024-12-05 03:10:36.441559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:05.758 [2024-12-05 03:10:36.441577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.758 [2024-12-05 03:10:36.441669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:05.758 [2024-12-05 03:10:36.441694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:05.758 [2024-12-05 03:10:36.441753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:05.758 [2024-12-05 03:10:36.441775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.758 [2024-12-05 03:10:36.441827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:05.758 [2024-12-05 03:10:36.441872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:05.758 [2024-12-05 03:10:36.441883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:26:05.758 [2024-12-05 03:10:36.441891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.758 [2024-12-05 03:10:36.441940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:05.758 [2024-12-05 03:10:36.441951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:05.758 [2024-12-05 03:10:36.441959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:05.758 [2024-12-05 03:10:36.441968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.758 [2024-12-05 03:10:36.442114] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 523.500 ms, result 0 00:26:06.331 00:26:06.331 00:26:06.592 03:10:37 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:08.507 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:08.507 03:10:39 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:26:08.507 03:10:39 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:26:08.507 03:10:39 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:08.507 03:10:39 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:08.507 03:10:39 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:08.507 03:10:39 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77230 00:26:08.507 03:10:39 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77230 ']' 00:26:08.507 03:10:39 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77230 00:26:08.507 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77230) - No such process 00:26:08.508 Process with pid 77230 is not found 00:26:08.508 03:10:39 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 77230 is not found' 00:26:08.508 03:10:39 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:26:08.508 Remove shared memory files 00:26:08.508 03:10:39 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:08.508 03:10:39 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:26:08.508 03:10:39 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:26:08.508 03:10:39 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:26:08.508 03:10:39 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:08.508 03:10:39 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:26:08.508 00:26:08.508 real 5m8.001s 00:26:08.508 user 4m56.317s 00:26:08.508 sys 0m11.624s 00:26:08.508 03:10:39 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:08.508 03:10:39 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:26:08.508 ************************************ 00:26:08.508 END TEST ftl_restore 00:26:08.508 ************************************ 00:26:08.508 03:10:39 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:26:08.508 03:10:39 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:08.508 03:10:39 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:08.508 03:10:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:08.508 ************************************ 00:26:08.508 START TEST ftl_dirty_shutdown 00:26:08.508 
************************************ 00:26:08.508 03:10:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:26:08.772 * Looking for test storage... 00:26:08.772 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:08.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.772 --rc genhtml_branch_coverage=1 00:26:08.772 --rc genhtml_function_coverage=1 00:26:08.772 --rc genhtml_legend=1 00:26:08.772 --rc geninfo_all_blocks=1 00:26:08.772 --rc geninfo_unexecuted_blocks=1 00:26:08.772 00:26:08.772 ' 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:08.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.772 --rc genhtml_branch_coverage=1 00:26:08.772 --rc genhtml_function_coverage=1 00:26:08.772 --rc genhtml_legend=1 00:26:08.772 --rc geninfo_all_blocks=1 00:26:08.772 --rc geninfo_unexecuted_blocks=1 00:26:08.772 00:26:08.772 ' 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:08.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.772 --rc genhtml_branch_coverage=1 00:26:08.772 --rc genhtml_function_coverage=1 00:26:08.772 --rc genhtml_legend=1 00:26:08.772 --rc geninfo_all_blocks=1 00:26:08.772 --rc geninfo_unexecuted_blocks=1 00:26:08.772 00:26:08.772 ' 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:08.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.772 --rc genhtml_branch_coverage=1 00:26:08.772 --rc genhtml_function_coverage=1 00:26:08.772 --rc genhtml_legend=1 00:26:08.772 --rc geninfo_all_blocks=1 00:26:08.772 --rc geninfo_unexecuted_blocks=1 00:26:08.772 00:26:08.772 ' 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:08.772 03:10:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:26:08.773 03:10:39 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=80475 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 80475 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80475 ']' 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:26:08.773 03:10:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:08.773 [2024-12-05 03:10:39.571910] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:26:08.773 [2024-12-05 03:10:39.572023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80475 ] 00:26:09.034 [2024-12-05 03:10:39.725977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.034 [2024-12-05 03:10:39.829274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.979 03:10:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:09.979 03:10:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:26:09.979 03:10:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:09.979 03:10:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:26:09.979 03:10:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:09.979 03:10:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:26:09.979 03:10:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:09.979 03:10:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:09.979 03:10:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:09.979 03:10:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:09.979 03:10:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:09.979 03:10:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:26:09.979 03:10:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:09.979 03:10:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:09.979 03:10:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:09.979 03:10:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:10.241 03:10:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:10.241 { 00:26:10.241 "name": "nvme0n1", 00:26:10.241 "aliases": [ 00:26:10.241 "0496b553-4caf-4318-b492-75aae65fd5dd" 00:26:10.241 ], 00:26:10.241 "product_name": "NVMe disk", 00:26:10.241 "block_size": 4096, 00:26:10.241 "num_blocks": 1310720, 00:26:10.241 "uuid": "0496b553-4caf-4318-b492-75aae65fd5dd", 00:26:10.241 "numa_id": -1, 00:26:10.241 "assigned_rate_limits": { 00:26:10.241 "rw_ios_per_sec": 0, 00:26:10.241 "rw_mbytes_per_sec": 0, 00:26:10.241 "r_mbytes_per_sec": 0, 00:26:10.241 "w_mbytes_per_sec": 0 00:26:10.241 }, 00:26:10.241 "claimed": true, 00:26:10.241 "claim_type": "read_many_write_one", 00:26:10.241 "zoned": false, 00:26:10.241 "supported_io_types": { 00:26:10.241 "read": true, 00:26:10.241 "write": true, 00:26:10.241 "unmap": true, 00:26:10.241 "flush": true, 00:26:10.241 "reset": true, 00:26:10.241 "nvme_admin": true, 00:26:10.241 "nvme_io": true, 00:26:10.241 "nvme_io_md": false, 00:26:10.241 "write_zeroes": true, 00:26:10.241 "zcopy": false, 00:26:10.241 "get_zone_info": false, 00:26:10.241 "zone_management": false, 00:26:10.241 "zone_append": false, 00:26:10.241 "compare": true, 00:26:10.241 "compare_and_write": false, 00:26:10.241 "abort": true, 00:26:10.241 "seek_hole": false, 00:26:10.241 "seek_data": false, 00:26:10.241 
"copy": true, 00:26:10.241 "nvme_iov_md": false 00:26:10.241 }, 00:26:10.241 "driver_specific": { 00:26:10.241 "nvme": [ 00:26:10.241 { 00:26:10.241 "pci_address": "0000:00:11.0", 00:26:10.241 "trid": { 00:26:10.241 "trtype": "PCIe", 00:26:10.241 "traddr": "0000:00:11.0" 00:26:10.241 }, 00:26:10.241 "ctrlr_data": { 00:26:10.241 "cntlid": 0, 00:26:10.241 "vendor_id": "0x1b36", 00:26:10.241 "model_number": "QEMU NVMe Ctrl", 00:26:10.241 "serial_number": "12341", 00:26:10.241 "firmware_revision": "8.0.0", 00:26:10.241 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:10.241 "oacs": { 00:26:10.241 "security": 0, 00:26:10.241 "format": 1, 00:26:10.241 "firmware": 0, 00:26:10.241 "ns_manage": 1 00:26:10.241 }, 00:26:10.241 "multi_ctrlr": false, 00:26:10.241 "ana_reporting": false 00:26:10.241 }, 00:26:10.241 "vs": { 00:26:10.241 "nvme_version": "1.4" 00:26:10.241 }, 00:26:10.241 "ns_data": { 00:26:10.241 "id": 1, 00:26:10.241 "can_share": false 00:26:10.241 } 00:26:10.241 } 00:26:10.241 ], 00:26:10.241 "mp_policy": "active_passive" 00:26:10.241 } 00:26:10.241 } 00:26:10.241 ]' 00:26:10.241 03:10:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:10.241 03:10:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:10.241 03:10:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:10.242 03:10:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:26:10.242 03:10:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:26:10.242 03:10:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:26:10.242 03:10:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:26:10.242 03:10:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:10.242 03:10:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:26:10.242 03:10:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:10.242 03:10:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:10.503 03:10:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=b5cdcd59-7bd0-4135-a751-6287ffad22b2 00:26:10.503 03:10:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:26:10.503 03:10:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b5cdcd59-7bd0-4135-a751-6287ffad22b2 00:26:10.503 03:10:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:10.763 03:10:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=02b5c5cc-bfca-4040-8f62-8f72ab0143da 00:26:10.763 03:10:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 02b5c5cc-bfca-4040-8f62-8f72ab0143da 00:26:11.022 03:10:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=b5fa5e1c-36af-4b18-957f-58f911815686 00:26:11.022 03:10:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:26:11.022 03:10:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b5fa5e1c-36af-4b18-957f-58f911815686 00:26:11.022 03:10:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:26:11.022 03:10:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:26:11.022 03:10:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=b5fa5e1c-36af-4b18-957f-58f911815686 00:26:11.022 03:10:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:26:11.022 03:10:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size b5fa5e1c-36af-4b18-957f-58f911815686 00:26:11.022 03:10:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=b5fa5e1c-36af-4b18-957f-58f911815686 00:26:11.022 03:10:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:11.022 03:10:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:11.022 03:10:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:11.022 03:10:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b5fa5e1c-36af-4b18-957f-58f911815686 00:26:11.281 03:10:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:11.281 { 00:26:11.281 "name": "b5fa5e1c-36af-4b18-957f-58f911815686", 00:26:11.281 "aliases": [ 00:26:11.281 "lvs/nvme0n1p0" 00:26:11.281 ], 00:26:11.281 "product_name": "Logical Volume", 00:26:11.281 "block_size": 4096, 00:26:11.281 "num_blocks": 26476544, 00:26:11.281 "uuid": "b5fa5e1c-36af-4b18-957f-58f911815686", 00:26:11.281 "assigned_rate_limits": { 00:26:11.281 "rw_ios_per_sec": 0, 00:26:11.281 "rw_mbytes_per_sec": 0, 00:26:11.281 "r_mbytes_per_sec": 0, 00:26:11.281 "w_mbytes_per_sec": 0 00:26:11.281 }, 00:26:11.281 "claimed": false, 00:26:11.281 "zoned": false, 00:26:11.281 "supported_io_types": { 00:26:11.281 "read": true, 00:26:11.281 "write": true, 00:26:11.281 "unmap": true, 00:26:11.281 "flush": false, 00:26:11.281 "reset": true, 00:26:11.281 "nvme_admin": false, 00:26:11.281 "nvme_io": false, 00:26:11.281 "nvme_io_md": false, 00:26:11.281 "write_zeroes": true, 00:26:11.281 "zcopy": false, 00:26:11.281 "get_zone_info": false, 00:26:11.281 "zone_management": false, 00:26:11.281 "zone_append": false, 00:26:11.281 "compare": false, 00:26:11.281 "compare_and_write": false, 00:26:11.281 "abort": false, 00:26:11.281 "seek_hole": true, 00:26:11.281 "seek_data": true, 00:26:11.281 "copy": false, 00:26:11.281 "nvme_iov_md": false 00:26:11.281 }, 00:26:11.281 "driver_specific": { 00:26:11.281 "lvol": { 00:26:11.281 "lvol_store_uuid": "02b5c5cc-bfca-4040-8f62-8f72ab0143da", 00:26:11.281 "base_bdev": "nvme0n1", 00:26:11.281 "thin_provision": true, 00:26:11.281 "num_allocated_clusters": 0, 00:26:11.281 "snapshot": false, 00:26:11.281 "clone": false, 00:26:11.281 "esnap_clone": false 00:26:11.281 } 00:26:11.281 } 00:26:11.281 } 00:26:11.281 ]' 00:26:11.281 03:10:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:11.281 03:10:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:11.281 03:10:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:11.281 03:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:11.281 03:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:11.281 03:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:11.282 03:10:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:26:11.282 03:10:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:11.282 03:10:42 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:11.541 03:10:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:11.541 03:10:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:11.541 03:10:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size b5fa5e1c-36af-4b18-957f-58f911815686 00:26:11.541 03:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=b5fa5e1c-36af-4b18-957f-58f911815686 00:26:11.541 03:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:11.541 03:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:11.541 03:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:11.541 03:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b5fa5e1c-36af-4b18-957f-58f911815686 00:26:11.800 03:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:11.800 { 00:26:11.800 "name": "b5fa5e1c-36af-4b18-957f-58f911815686", 00:26:11.800 "aliases": [ 00:26:11.800 "lvs/nvme0n1p0" 00:26:11.800 ], 00:26:11.800 "product_name": "Logical Volume", 00:26:11.800 "block_size": 4096, 00:26:11.800 "num_blocks": 26476544, 00:26:11.800 "uuid": "b5fa5e1c-36af-4b18-957f-58f911815686", 00:26:11.800 "assigned_rate_limits": { 00:26:11.800 "rw_ios_per_sec": 0, 00:26:11.800 "rw_mbytes_per_sec": 0, 00:26:11.800 "r_mbytes_per_sec": 0, 00:26:11.800 "w_mbytes_per_sec": 0 00:26:11.800 }, 00:26:11.800 "claimed": false, 00:26:11.800 "zoned": false, 00:26:11.800 "supported_io_types": { 00:26:11.800 "read": true, 00:26:11.800 "write": true, 00:26:11.800 "unmap": true, 00:26:11.800 "flush": false, 00:26:11.800 "reset": true, 00:26:11.800 "nvme_admin": false, 00:26:11.800 "nvme_io": false, 00:26:11.800 "nvme_io_md": false, 00:26:11.800 "write_zeroes": true, 00:26:11.800 "zcopy": false, 00:26:11.800 "get_zone_info": false, 00:26:11.800 "zone_management": false, 00:26:11.800 "zone_append": false, 00:26:11.800 "compare": false, 00:26:11.800 "compare_and_write": false, 00:26:11.800 "abort": false, 00:26:11.800 "seek_hole": true, 00:26:11.800 "seek_data": true, 00:26:11.800 "copy": false, 00:26:11.800 "nvme_iov_md": false 00:26:11.800 }, 00:26:11.800 "driver_specific": { 00:26:11.800 "lvol": { 00:26:11.800 "lvol_store_uuid": "02b5c5cc-bfca-4040-8f62-8f72ab0143da", 00:26:11.800 "base_bdev": "nvme0n1", 00:26:11.800 "thin_provision": true, 00:26:11.800 "num_allocated_clusters": 0, 00:26:11.800 "snapshot": false, 00:26:11.800 "clone": false, 00:26:11.800 "esnap_clone": false 00:26:11.800 } 00:26:11.800 } 00:26:11.800 } 00:26:11.800 ]' 00:26:11.800 03:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:11.800 03:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:11.800 03:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:11.800 03:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:11.800 03:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:11.800 03:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:11.800 03:10:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:26:11.800 03:10:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:12.059 03:10:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:26:12.059 03:10:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size b5fa5e1c-36af-4b18-957f-58f911815686 00:26:12.059 03:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=b5fa5e1c-36af-4b18-957f-58f911815686 00:26:12.059 03:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:12.059 03:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:12.059 03:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:12.059 03:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b5fa5e1c-36af-4b18-957f-58f911815686 00:26:12.318 03:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:12.318 { 00:26:12.318 "name": "b5fa5e1c-36af-4b18-957f-58f911815686", 00:26:12.318 "aliases": [ 00:26:12.318 "lvs/nvme0n1p0" 00:26:12.318 ], 00:26:12.318 "product_name": "Logical Volume", 00:26:12.318 "block_size": 4096, 00:26:12.318 "num_blocks": 26476544, 00:26:12.318 "uuid": "b5fa5e1c-36af-4b18-957f-58f911815686", 00:26:12.318 "assigned_rate_limits": { 00:26:12.318 "rw_ios_per_sec": 0, 00:26:12.318 "rw_mbytes_per_sec": 0, 00:26:12.318 "r_mbytes_per_sec": 0, 00:26:12.318 "w_mbytes_per_sec": 0 00:26:12.318 }, 00:26:12.318 "claimed": false, 00:26:12.318 "zoned": false, 00:26:12.318 "supported_io_types": { 00:26:12.318 "read": true, 00:26:12.318 "write": true, 00:26:12.318 "unmap": true, 00:26:12.318 "flush": false, 00:26:12.318 "reset": true, 00:26:12.318 "nvme_admin": false, 00:26:12.318 "nvme_io": false, 00:26:12.318 "nvme_io_md": false, 00:26:12.318 "write_zeroes": true, 00:26:12.318 "zcopy": false, 00:26:12.318 "get_zone_info": false, 00:26:12.318 "zone_management": false, 00:26:12.318 "zone_append": false, 00:26:12.318 "compare": false, 00:26:12.318 "compare_and_write": false, 00:26:12.318 "abort": false, 00:26:12.318 "seek_hole": true, 00:26:12.318 "seek_data": true, 00:26:12.318 "copy": false, 00:26:12.318 "nvme_iov_md": false 00:26:12.318 }, 00:26:12.318 "driver_specific": { 00:26:12.318 "lvol": { 00:26:12.318 "lvol_store_uuid": "02b5c5cc-bfca-4040-8f62-8f72ab0143da", 00:26:12.318 "base_bdev": "nvme0n1", 00:26:12.318 "thin_provision": true, 00:26:12.318 "num_allocated_clusters": 0, 00:26:12.318 "snapshot": false, 00:26:12.318 "clone": false, 00:26:12.318 "esnap_clone": false 00:26:12.318 } 00:26:12.318 } 00:26:12.318 } 00:26:12.318 ]' 00:26:12.318 03:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:12.318 03:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:12.318 03:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:12.318 03:10:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:12.318 03:10:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:12.318 03:10:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:12.318 03:10:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:26:12.318 03:10:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d b5fa5e1c-36af-4b18-957f-58f911815686 
--l2p_dram_limit 10' 00:26:12.318 03:10:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:26:12.318 03:10:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:26:12.318 03:10:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:12.318 03:10:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b5fa5e1c-36af-4b18-957f-58f911815686 --l2p_dram_limit 10 -c nvc0n1p0 00:26:12.579 [2024-12-05 03:10:43.216691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.579 [2024-12-05 03:10:43.216730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:12.579 [2024-12-05 03:10:43.216744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:12.579 [2024-12-05 03:10:43.216751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.579 [2024-12-05 03:10:43.216788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.579 [2024-12-05 03:10:43.216796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:12.579 [2024-12-05 03:10:43.216804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:26:12.579 [2024-12-05 03:10:43.216810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.579 [2024-12-05 03:10:43.216829] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:12.579 [2024-12-05 03:10:43.217366] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:12.580 [2024-12-05 03:10:43.217384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.580 [2024-12-05 03:10:43.217391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:12.580 [2024-12-05 03:10:43.217399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:26:12.580 [2024-12-05 03:10:43.217405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.580 [2024-12-05 03:10:43.217428] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4b01abfe-7442-4e7e-85e3-29dd0e69b26c 00:26:12.580 [2024-12-05 03:10:43.218672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.580 [2024-12-05 03:10:43.218692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:12.580 [2024-12-05 03:10:43.218701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:26:12.580 [2024-12-05 03:10:43.218711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.580 [2024-12-05 03:10:43.225500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.580 [2024-12-05 03:10:43.225527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:12.580 [2024-12-05 03:10:43.225535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.732 ms 00:26:12.580 [2024-12-05 03:10:43.225542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.580 [2024-12-05 03:10:43.225610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.580 [2024-12-05 03:10:43.225620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:12.580 [2024-12-05 03:10:43.225627] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:12.580 [2024-12-05 03:10:43.225637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.580 [2024-12-05 03:10:43.225676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.580 [2024-12-05 03:10:43.225686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:12.580 [2024-12-05 03:10:43.225694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:12.580 [2024-12-05 03:10:43.225701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.580 [2024-12-05 03:10:43.225717] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:12.580 [2024-12-05 03:10:43.228923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.580 [2024-12-05 03:10:43.228946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:12.580 [2024-12-05 03:10:43.228956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.207 ms 00:26:12.580 [2024-12-05 03:10:43.228962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.580 [2024-12-05 03:10:43.228999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.580 [2024-12-05 03:10:43.229007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:12.580 [2024-12-05 03:10:43.229015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:26:12.580 [2024-12-05 03:10:43.229021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.580 [2024-12-05 03:10:43.229036] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:12.580 [2024-12-05 03:10:43.229160] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:12.580 [2024-12-05 03:10:43.229174] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:12.580 [2024-12-05 03:10:43.229184] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:12.580 [2024-12-05 03:10:43.229194] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:12.580 [2024-12-05 03:10:43.229201] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:12.580 [2024-12-05 03:10:43.229209] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:12.580 [2024-12-05 03:10:43.229215] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:12.580 [2024-12-05 03:10:43.229226] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:12.580 [2024-12-05 03:10:43.229232] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:12.580 [2024-12-05 03:10:43.229240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.580 [2024-12-05 03:10:43.229251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:12.580 [2024-12-05 03:10:43.229258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:26:12.580 [2024-12-05 03:10:43.229264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.580 [2024-12-05 03:10:43.229332] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.580 [2024-12-05 03:10:43.229339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:12.580 [2024-12-05 03:10:43.229347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:26:12.580 [2024-12-05 03:10:43.229352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.580 [2024-12-05 03:10:43.229434] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:12.580 [2024-12-05 03:10:43.229442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:12.580 [2024-12-05 03:10:43.229450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:12.580 [2024-12-05 03:10:43.229457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:12.580 [2024-12-05 03:10:43.229465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:12.580 [2024-12-05 03:10:43.229470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:12.580 [2024-12-05 03:10:43.229477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:12.580 [2024-12-05 03:10:43.229483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:12.580 [2024-12-05 03:10:43.229492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:12.580 [2024-12-05 03:10:43.229497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:12.580 [2024-12-05 03:10:43.229504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:12.580 [2024-12-05 03:10:43.229509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:12.580 [2024-12-05 03:10:43.229517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:12.580 [2024-12-05 03:10:43.229522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:12.580 [2024-12-05 03:10:43.229529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:12.580 [2024-12-05 03:10:43.229534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:12.580 [2024-12-05 03:10:43.229543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:12.580 [2024-12-05 03:10:43.229549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:12.580 [2024-12-05 03:10:43.229556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:12.580 [2024-12-05 03:10:43.229561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:12.580 [2024-12-05 03:10:43.229567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:12.580 [2024-12-05 03:10:43.229573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:12.580 [2024-12-05 03:10:43.229579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:12.580 [2024-12-05 03:10:43.229584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:12.580 [2024-12-05 03:10:43.229591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:12.580 [2024-12-05 03:10:43.229596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:12.580 [2024-12-05 03:10:43.229603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:12.580 [2024-12-05 03:10:43.229608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:12.580 [2024-12-05 03:10:43.229619] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:12.580 [2024-12-05 03:10:43.229624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:12.580 [2024-12-05 03:10:43.229631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:12.580 [2024-12-05 03:10:43.229637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:12.581 [2024-12-05 03:10:43.229645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:12.581 [2024-12-05 03:10:43.229650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:12.581 [2024-12-05 03:10:43.229658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:12.581 [2024-12-05 03:10:43.229663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:12.581 [2024-12-05 03:10:43.229670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:12.581 [2024-12-05 03:10:43.229675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:12.581 [2024-12-05 03:10:43.229682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:12.581 [2024-12-05 03:10:43.229686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:12.581 [2024-12-05 03:10:43.229693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:12.581 [2024-12-05 03:10:43.229698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:12.581 [2024-12-05 03:10:43.229705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:12.581 [2024-12-05 03:10:43.229711] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:12.581 [2024-12-05 03:10:43.229719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:12.581 [2024-12-05 03:10:43.229725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:12.581 [2024-12-05 03:10:43.229732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:12.581 [2024-12-05 03:10:43.229739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:12.581 [2024-12-05 03:10:43.229747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:12.581 [2024-12-05 03:10:43.229753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:12.581 [2024-12-05 03:10:43.229759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:12.581 [2024-12-05 03:10:43.229764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:12.581 [2024-12-05 03:10:43.229771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:12.581 [2024-12-05 03:10:43.229778] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:12.581 [2024-12-05 03:10:43.229788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:12.581 [2024-12-05 03:10:43.229794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:12.581 [2024-12-05 03:10:43.229802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:12.581 [2024-12-05 03:10:43.229808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:12.581 [2024-12-05 03:10:43.229814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:12.581 [2024-12-05 03:10:43.229820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:12.581 [2024-12-05 03:10:43.229830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:12.581 [2024-12-05 03:10:43.229835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:12.581 [2024-12-05 03:10:43.229843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:12.581 [2024-12-05 03:10:43.229848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:12.581 [2024-12-05 03:10:43.229857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:12.581 [2024-12-05 03:10:43.229862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:12.581 [2024-12-05 03:10:43.229869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:12.581 [2024-12-05 03:10:43.229875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:12.581 [2024-12-05 03:10:43.229882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:12.581 [2024-12-05 03:10:43.229887] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:12.581 [2024-12-05 03:10:43.229895] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:12.581 [2024-12-05 03:10:43.229902] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:12.581 [2024-12-05 03:10:43.229910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:12.581 [2024-12-05 03:10:43.229915] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:12.581 [2024-12-05 03:10:43.229922] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:12.581 [2024-12-05 03:10:43.229928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.581 [2024-12-05 03:10:43.229936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:12.581 [2024-12-05 03:10:43.229941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:26:12.581 [2024-12-05 03:10:43.229948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.581 [2024-12-05 03:10:43.229988] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:12.581 [2024-12-05 03:10:43.229999] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:15.881 [2024-12-05 03:10:46.031924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.881 [2024-12-05 03:10:46.032014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:15.881 [2024-12-05 03:10:46.032034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2801.921 ms 00:26:15.881 [2024-12-05 03:10:46.032046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.881 [2024-12-05 03:10:46.065102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.881 [2024-12-05 03:10:46.065152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:15.882 [2024-12-05 03:10:46.065166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.773 ms 00:26:15.882 [2024-12-05 03:10:46.065176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.882 [2024-12-05 03:10:46.065306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.882 [2024-12-05 03:10:46.065320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:15.882 [2024-12-05 03:10:46.065329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:26:15.882 [2024-12-05 03:10:46.065345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.882 [2024-12-05 03:10:46.099273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.882 [2024-12-05 03:10:46.099312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:15.882 [2024-12-05 03:10:46.099323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.894 ms 00:26:15.882 [2024-12-05 03:10:46.099333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.882 [2024-12-05 03:10:46.099363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.882 [2024-12-05 03:10:46.099377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:15.882 [2024-12-05 03:10:46.099386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:15.882 [2024-12-05 03:10:46.099403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.882 [2024-12-05 03:10:46.099859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.882 [2024-12-05 03:10:46.099881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:15.882 [2024-12-05 03:10:46.099891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:26:15.882 [2024-12-05 03:10:46.099901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.882 [2024-12-05 03:10:46.100005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.882 [2024-12-05 03:10:46.100018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:15.882 [2024-12-05 03:10:46.100029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:26:15.882 [2024-12-05 03:10:46.100040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.882 [2024-12-05 03:10:46.115625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.882 [2024-12-05 03:10:46.115656] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:15.882 [2024-12-05 03:10:46.115666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.568 ms 00:26:15.882 [2024-12-05 03:10:46.115676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.882 [2024-12-05 03:10:46.146681] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:15.882 [2024-12-05 03:10:46.149939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.882 [2024-12-05 03:10:46.149968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:15.882 [2024-12-05 03:10:46.149982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.187 ms 00:26:15.882 [2024-12-05 03:10:46.149990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.882 [2024-12-05 03:10:46.228525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.882 [2024-12-05 03:10:46.228563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:15.882 [2024-12-05 03:10:46.228578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.495 ms 00:26:15.882 [2024-12-05 03:10:46.228587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.882 [2024-12-05 03:10:46.228776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.882 [2024-12-05 03:10:46.228790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:15.882 [2024-12-05 03:10:46.228804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:26:15.882 [2024-12-05 03:10:46.228813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.882 [2024-12-05 03:10:46.252959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.882 [2024-12-05 03:10:46.253002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:15.882 [2024-12-05 03:10:46.253016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.098 ms 00:26:15.882 [2024-12-05 03:10:46.253026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.882 [2024-12-05 03:10:46.276357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.882 [2024-12-05 03:10:46.276388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:15.882 [2024-12-05 03:10:46.276402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.288 ms 00:26:15.882 [2024-12-05 03:10:46.276410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.882 [2024-12-05 03:10:46.276998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.882 [2024-12-05 03:10:46.277015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:15.882 [2024-12-05 03:10:46.277027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:26:15.882 [2024-12-05 03:10:46.277037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.882 [2024-12-05 03:10:46.358019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.882 [2024-12-05 03:10:46.358064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:15.882 [2024-12-05 03:10:46.358097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.941 ms 00:26:15.882 [2024-12-05 03:10:46.358107] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.882 [2024-12-05 03:10:46.386293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.882 [2024-12-05 03:10:46.386339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:15.882 [2024-12-05 03:10:46.386355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.091 ms 00:26:15.882 [2024-12-05 03:10:46.386364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.882 [2024-12-05 03:10:46.411905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.882 [2024-12-05 03:10:46.411950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:15.882 [2024-12-05 03:10:46.411966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.484 ms 00:26:15.882 [2024-12-05 03:10:46.411975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.882 [2024-12-05 03:10:46.438108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.882 [2024-12-05 03:10:46.438152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:15.882 [2024-12-05 03:10:46.438169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.078 ms 00:26:15.882 [2024-12-05 03:10:46.438178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.882 [2024-12-05 03:10:46.438237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.882 [2024-12-05 03:10:46.438248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:15.882 [2024-12-05 03:10:46.438264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:15.882 [2024-12-05 03:10:46.438273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.882 [2024-12-05 03:10:46.438377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.882 [2024-12-05 03:10:46.438393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:15.882 [2024-12-05 03:10:46.438405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:26:15.882 [2024-12-05 03:10:46.438415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.882 [2024-12-05 03:10:46.440247] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3222.887 ms, result 0 00:26:15.882 { 00:26:15.882 "name": "ftl0", 00:26:15.882 "uuid": "4b01abfe-7442-4e7e-85e3-29dd0e69b26c" 00:26:15.882 } 00:26:15.882 03:10:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:26:15.882 03:10:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:15.882 03:10:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:26:15.882 03:10:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:26:15.882 03:10:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:26:16.144 /dev/nbd0 00:26:16.144 03:10:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:26:16.144 03:10:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:16.144 03:10:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:26:16.144 03:10:46 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:16.144 03:10:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:16.144 03:10:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:16.144 03:10:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:26:16.144 03:10:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:16.144 03:10:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:16.144 03:10:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:26:16.144 1+0 records in 00:26:16.144 1+0 records out 00:26:16.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000578713 s, 7.1 MB/s 00:26:16.144 03:10:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:16.144 03:10:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:26:16.144 03:10:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:16.144 03:10:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:16.144 03:10:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:26:16.144 03:10:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:26:16.404 [2024-12-05 03:10:47.024146] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:26:16.404 [2024-12-05 03:10:47.024272] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80629 ] 00:26:16.404 [2024-12-05 03:10:47.187194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.666 [2024-12-05 03:10:47.313108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.048  [2024-12-05T03:10:49.828Z] Copying: 194/1024 [MB] (194 MBps) [2024-12-05T03:10:50.764Z] Copying: 390/1024 [MB] (196 MBps) [2024-12-05T03:10:51.700Z] Copying: 587/1024 [MB] (196 MBps) [2024-12-05T03:10:52.635Z] Copying: 801/1024 [MB] (214 MBps) [2024-12-05T03:10:53.200Z] Copying: 1024/1024 [MB] (average 209 MBps) 00:26:22.356 00:26:22.356 03:10:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:24.256 03:10:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:26:24.256 [2024-12-05 03:10:54.699732] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:26:24.256 [2024-12-05 03:10:54.699826] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80715 ] 00:26:24.256 [2024-12-05 03:10:54.848764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.256 [2024-12-05 03:10:54.924907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.637  [2024-12-05T03:10:57.417Z] Copying: 22/1024 [MB] (22 MBps) [2024-12-05T03:10:58.359Z] Copying: 42/1024 [MB] (19 MBps) [2024-12-05T03:10:59.302Z] Copying: 59/1024 [MB] (17 MBps) [2024-12-05T03:11:00.243Z] Copying: 77/1024 [MB] (17 MBps) [2024-12-05T03:11:01.187Z] Copying: 102/1024 [MB] (25 MBps) [2024-12-05T03:11:02.132Z] Copying: 123/1024 [MB] (20 MBps) [2024-12-05T03:11:03.524Z] Copying: 134/1024 [MB] (11 MBps) [2024-12-05T03:11:04.458Z] Copying: 147404/1048576 [kB] (9372 kBps) [2024-12-05T03:11:05.392Z] Copying: 158/1024 [MB] (14 MBps) [2024-12-05T03:11:06.364Z] Copying: 172116/1048576 [kB] (10012 kBps) [2024-12-05T03:11:07.322Z] Copying: 182/1024 [MB] (14 MBps) [2024-12-05T03:11:08.256Z] Copying: 200/1024 [MB] (17 MBps) [2024-12-05T03:11:09.190Z] Copying: 213/1024 [MB] (13 MBps) [2024-12-05T03:11:10.123Z] Copying: 235/1024 [MB] (21 MBps) [2024-12-05T03:11:11.493Z] Copying: 256/1024 [MB] (20 MBps) [2024-12-05T03:11:12.427Z] Copying: 275/1024 [MB] (19 MBps) [2024-12-05T03:11:13.362Z] Copying: 307/1024 [MB] (31 MBps) [2024-12-05T03:11:14.297Z] Copying: 328/1024 [MB] (20 MBps) [2024-12-05T03:11:15.232Z] Copying: 353/1024 [MB] (25 MBps) [2024-12-05T03:11:16.163Z] Copying: 376/1024 [MB] (22 MBps) [2024-12-05T03:11:17.533Z] Copying: 406/1024 [MB] (30 MBps) [2024-12-05T03:11:18.100Z] Copying: 430/1024 [MB] (23 MBps) [2024-12-05T03:11:19.475Z] Copying: 464/1024 [MB] (34 MBps) [2024-12-05T03:11:20.410Z] Copying: 484/1024 [MB] (19 MBps) [2024-12-05T03:11:21.343Z] Copying: 505/1024 [MB] (21 MBps) [2024-12-05T03:11:22.275Z] Copying: 524/1024 [MB] (19 MBps) [2024-12-05T03:11:23.209Z] Copying: 545/1024 [MB] (21 MBps) [2024-12-05T03:11:24.145Z] Copying: 570/1024 [MB] (25 MBps) [2024-12-05T03:11:25.519Z] Copying: 588/1024 [MB] (17 MBps) [2024-12-05T03:11:26.451Z] Copying: 610/1024 [MB] (22 MBps) [2024-12-05T03:11:27.384Z] Copying: 632/1024 [MB] (21 MBps) [2024-12-05T03:11:28.317Z] Copying: 659/1024 [MB] (27 MBps) [2024-12-05T03:11:29.248Z] Copying: 689/1024 [MB] (30 MBps) [2024-12-05T03:11:30.184Z] Copying: 712/1024 [MB] (22 MBps) [2024-12-05T03:11:31.118Z] Copying: 732/1024 [MB] (20 MBps) [2024-12-05T03:11:32.486Z] Copying: 762/1024 [MB] (29 MBps) [2024-12-05T03:11:33.417Z] Copying: 782/1024 [MB] (20 MBps) [2024-12-05T03:11:34.368Z] Copying: 801/1024 [MB] (19 MBps) [2024-12-05T03:11:35.369Z] Copying: 828/1024 [MB] (26 MBps) [2024-12-05T03:11:36.301Z] Copying: 852/1024 [MB] (24 MBps) [2024-12-05T03:11:37.236Z] Copying: 878/1024 [MB] (25 MBps) [2024-12-05T03:11:38.171Z] Copying: 903/1024 [MB] (25 MBps) [2024-12-05T03:11:39.105Z] Copying: 929/1024 [MB] (26 MBps) [2024-12-05T03:11:40.495Z] Copying: 962/1024 [MB] (32 MBps) [2024-12-05T03:11:41.429Z] Copying: 984/1024 [MB] (21 MBps) [2024-12-05T03:11:41.995Z] Copying: 1006/1024 [MB] (21 MBps) [2024-12-05T03:11:42.562Z] Copying: 1024/1024 [MB] (average 21 MBps) 00:27:11.718 00:27:11.718 03:11:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:27:11.718 03:11:42 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:27:11.976 03:11:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:12.238 [2024-12-05 03:11:42.860963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.238 [2024-12-05 03:11:42.861004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:12.238 [2024-12-05 03:11:42.861014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:27:12.238 [2024-12-05 03:11:42.861021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.238 [2024-12-05 03:11:42.861041] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:12.238 [2024-12-05 03:11:42.863053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.238 [2024-12-05 03:11:42.863084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:12.238 [2024-12-05 03:11:42.863095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.998 ms 00:27:12.238 [2024-12-05 03:11:42.863101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.238 [2024-12-05 03:11:42.865182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.238 [2024-12-05 03:11:42.865209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:12.238 [2024-12-05 03:11:42.865219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.060 ms 00:27:12.238 [2024-12-05 03:11:42.865225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.238 [2024-12-05 03:11:42.879352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.238 [2024-12-05 03:11:42.879380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:12.238 [2024-12-05 03:11:42.879391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.109 ms 00:27:12.238 [2024-12-05 03:11:42.879396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.238 [2024-12-05 03:11:42.884160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.238 [2024-12-05 03:11:42.884183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:12.238 [2024-12-05 03:11:42.884193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.736 ms 00:27:12.238 [2024-12-05 03:11:42.884200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.238 [2024-12-05 03:11:42.902227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.238 [2024-12-05 03:11:42.902254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:12.238 [2024-12-05 03:11:42.902264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.973 ms 00:27:12.238 [2024-12-05 03:11:42.902271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.238 [2024-12-05 03:11:42.914653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.238 [2024-12-05 03:11:42.914682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:12.238 [2024-12-05 03:11:42.914695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.351 ms 00:27:12.238 [2024-12-05 03:11:42.914700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:27:12.238 [2024-12-05 03:11:42.914804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.238 [2024-12-05 03:11:42.914812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:12.238 [2024-12-05 03:11:42.914819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:27:12.238 [2024-12-05 03:11:42.914825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.238 [2024-12-05 03:11:42.932546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.238 [2024-12-05 03:11:42.932572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:12.238 [2024-12-05 03:11:42.932581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.706 ms 00:27:12.238 [2024-12-05 03:11:42.932587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.238 [2024-12-05 03:11:42.950099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.238 [2024-12-05 03:11:42.950125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:12.238 [2024-12-05 03:11:42.950134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.482 ms 00:27:12.238 [2024-12-05 03:11:42.950139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.238 [2024-12-05 03:11:42.967199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.238 [2024-12-05 03:11:42.967227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:12.238 [2024-12-05 03:11:42.967236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.030 ms 00:27:12.238 [2024-12-05 03:11:42.967241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.238 [2024-12-05 03:11:42.984528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.238 [2024-12-05 03:11:42.984554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:12.238 [2024-12-05 03:11:42.984563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.233 ms 00:27:12.238 [2024-12-05 03:11:42.984569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.238 [2024-12-05 03:11:42.984595] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:12.238 [2024-12-05 03:11:42.984605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 
03:11:42.984658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 
00:27:12.238 [2024-12-05 03:11:42.984821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:12.238 [2024-12-05 03:11:42.984839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.984846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.984851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.984860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.984865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.984872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.984877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.984885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.984890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.984897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.984908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.984915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.984920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.984927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.984932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.984943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.984949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.984955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.984961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.984969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.984975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.984983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 
wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.984989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.984996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:12.239 [2024-12-05 03:11:42.985281] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:12.239 [2024-12-05 03:11:42.985288] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4b01abfe-7442-4e7e-85e3-29dd0e69b26c 00:27:12.239 [2024-12-05 03:11:42.985294] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:12.239 [2024-12-05 03:11:42.985302] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:12.239 [2024-12-05 03:11:42.985309] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:12.239 [2024-12-05 03:11:42.985316] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:12.239 [2024-12-05 03:11:42.985321] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:12.239 [2024-12-05 03:11:42.985329] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:12.239 [2024-12-05 03:11:42.985334] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:12.239 [2024-12-05 
03:11:42.985340] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:12.239 [2024-12-05 03:11:42.985345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:12.239 [2024-12-05 03:11:42.985351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.239 [2024-12-05 03:11:42.985357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:12.239 [2024-12-05 03:11:42.985364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.758 ms 00:27:12.239 [2024-12-05 03:11:42.985370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.239 [2024-12-05 03:11:42.994943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.239 [2024-12-05 03:11:42.994970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:12.239 [2024-12-05 03:11:42.994979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.550 ms 00:27:12.239 [2024-12-05 03:11:42.994985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.239 [2024-12-05 03:11:42.995260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.239 [2024-12-05 03:11:42.995272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:12.239 [2024-12-05 03:11:42.995281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:27:12.239 [2024-12-05 03:11:42.995287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.239 [2024-12-05 03:11:43.028268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:12.239 [2024-12-05 03:11:43.028296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:12.239 [2024-12-05 03:11:43.028305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:12.239 [2024-12-05 03:11:43.028312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.239 [2024-12-05 03:11:43.028354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:12.239 [2024-12-05 03:11:43.028360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:12.239 [2024-12-05 03:11:43.028367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:12.239 [2024-12-05 03:11:43.028373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.239 [2024-12-05 03:11:43.028448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:12.239 [2024-12-05 03:11:43.028458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:12.239 [2024-12-05 03:11:43.028465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:12.239 [2024-12-05 03:11:43.028471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.239 [2024-12-05 03:11:43.028491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:12.239 [2024-12-05 03:11:43.028498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:12.239 [2024-12-05 03:11:43.028505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:12.239 [2024-12-05 03:11:43.028510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.500 [2024-12-05 03:11:43.088050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:12.500 [2024-12-05 03:11:43.088091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize NV cache 00:27:12.500 [2024-12-05 03:11:43.088100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:12.500 [2024-12-05 03:11:43.088106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.500 [2024-12-05 03:11:43.136541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:12.500 [2024-12-05 03:11:43.136577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:12.500 [2024-12-05 03:11:43.136587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:12.500 [2024-12-05 03:11:43.136594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.500 [2024-12-05 03:11:43.136653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:12.500 [2024-12-05 03:11:43.136660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:12.500 [2024-12-05 03:11:43.136670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:12.500 [2024-12-05 03:11:43.136676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.500 [2024-12-05 03:11:43.136722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:12.500 [2024-12-05 03:11:43.136730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:12.500 [2024-12-05 03:11:43.136738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:12.500 [2024-12-05 03:11:43.136744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.500 [2024-12-05 03:11:43.136816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:12.500 [2024-12-05 03:11:43.136823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:12.500 [2024-12-05 03:11:43.136831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:12.500 [2024-12-05 03:11:43.136838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.500 [2024-12-05 03:11:43.136864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:12.500 [2024-12-05 03:11:43.136871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:12.500 [2024-12-05 03:11:43.136878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:12.500 [2024-12-05 03:11:43.136884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.500 [2024-12-05 03:11:43.136914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:12.500 [2024-12-05 03:11:43.136920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:12.500 [2024-12-05 03:11:43.136928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:12.500 [2024-12-05 03:11:43.136935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.500 [2024-12-05 03:11:43.136972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:12.500 [2024-12-05 03:11:43.136980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:12.500 [2024-12-05 03:11:43.136987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:12.500 [2024-12-05 03:11:43.136992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.500 [2024-12-05 03:11:43.137118] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, 
name 'FTL shutdown', duration = 276.102 ms, result 0 00:27:12.500 true 00:27:12.500 03:11:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 80475 00:27:12.500 03:11:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid80475 00:27:12.501 03:11:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:27:12.501 [2024-12-05 03:11:43.230508] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:27:12.501 [2024-12-05 03:11:43.230626] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81238 ] 00:27:12.761 [2024-12-05 03:11:43.386869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.761 [2024-12-05 03:11:43.461545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.146  [2024-12-05T03:11:45.945Z] Copying: 256/1024 [MB] (256 MBps) [2024-12-05T03:11:46.883Z] Copying: 514/1024 [MB] (258 MBps) [2024-12-05T03:11:47.826Z] Copying: 769/1024 [MB] (254 MBps) [2024-12-05T03:11:47.826Z] Copying: 1022/1024 [MB] (253 MBps) [2024-12-05T03:11:48.397Z] Copying: 1024/1024 [MB] (average 255 MBps) 00:27:17.553 00:27:17.553 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 80475 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:27:17.553 03:11:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:17.553 [2024-12-05 03:11:48.284637] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:27:17.553 [2024-12-05 03:11:48.284752] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81295 ] 00:27:17.814 [2024-12-05 03:11:48.442533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.814 [2024-12-05 03:11:48.518929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.076 [2024-12-05 03:11:48.727959] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:18.076 [2024-12-05 03:11:48.728009] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:18.076 [2024-12-05 03:11:48.791640] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:27:18.076 [2024-12-05 03:11:48.792218] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:27:18.076 [2024-12-05 03:11:48.793122] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:27:18.339 [2024-12-05 03:11:49.142583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.339 [2024-12-05 03:11:49.142636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:18.339 [2024-12-05 03:11:49.142652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:18.339 [2024-12-05 03:11:49.142663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.339 [2024-12-05 03:11:49.142715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.339 [2024-12-05 03:11:49.142725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:18.339 [2024-12-05 03:11:49.142734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:27:18.339 [2024-12-05 03:11:49.142742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.339 [2024-12-05 03:11:49.142763] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:18.339 [2024-12-05 03:11:49.143509] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:18.339 [2024-12-05 03:11:49.143529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.339 [2024-12-05 03:11:49.143539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:18.339 [2024-12-05 03:11:49.143548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.772 ms 00:27:18.339 [2024-12-05 03:11:49.143556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.339 [2024-12-05 03:11:49.145405] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:18.339 [2024-12-05 03:11:49.160039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.339 [2024-12-05 03:11:49.160091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:18.339 [2024-12-05 03:11:49.160104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.636 ms 00:27:18.339 [2024-12-05 03:11:49.160113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.339 [2024-12-05 03:11:49.160205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.339 [2024-12-05 03:11:49.160216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:27:18.339 [2024-12-05 03:11:49.160226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:27:18.339 [2024-12-05 03:11:49.160234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.339 [2024-12-05 03:11:49.168891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.339 [2024-12-05 03:11:49.168931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:18.339 [2024-12-05 03:11:49.168942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.578 ms 00:27:18.339 [2024-12-05 03:11:49.168950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.339 [2024-12-05 03:11:49.169038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.339 [2024-12-05 03:11:49.169048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:18.339 [2024-12-05 03:11:49.169058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:27:18.339 [2024-12-05 03:11:49.169065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.339 [2024-12-05 03:11:49.169143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.339 [2024-12-05 03:11:49.169154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:18.339 [2024-12-05 03:11:49.169162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:18.339 [2024-12-05 03:11:49.169170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.339 [2024-12-05 03:11:49.169193] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:18.339 [2024-12-05 03:11:49.173284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.339 [2024-12-05 03:11:49.173317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:18.339 [2024-12-05 03:11:49.173328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.097 ms 00:27:18.339 [2024-12-05 03:11:49.173336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.339 [2024-12-05 03:11:49.173374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.339 [2024-12-05 03:11:49.173383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:18.339 [2024-12-05 03:11:49.173391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:18.339 [2024-12-05 03:11:49.173399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.339 [2024-12-05 03:11:49.173458] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:18.339 [2024-12-05 03:11:49.173484] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:18.339 [2024-12-05 03:11:49.173521] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:18.339 [2024-12-05 03:11:49.173537] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:18.339 [2024-12-05 03:11:49.173645] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:18.339 [2024-12-05 03:11:49.173657] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:18.339 
[2024-12-05 03:11:49.173668] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:18.339 [2024-12-05 03:11:49.173681] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:18.339 [2024-12-05 03:11:49.173690] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:18.339 [2024-12-05 03:11:49.173698] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:18.339 [2024-12-05 03:11:49.173706] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:18.339 [2024-12-05 03:11:49.173713] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:18.339 [2024-12-05 03:11:49.173722] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:18.339 [2024-12-05 03:11:49.173730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.339 [2024-12-05 03:11:49.173738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:18.339 [2024-12-05 03:11:49.173745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:27:18.339 [2024-12-05 03:11:49.173752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.339 [2024-12-05 03:11:49.173835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.339 [2024-12-05 03:11:49.173847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:18.339 [2024-12-05 03:11:49.173855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:27:18.339 [2024-12-05 03:11:49.173862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.339 [2024-12-05 03:11:49.173969] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:18.339 [2024-12-05 03:11:49.173980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:18.339 [2024-12-05 03:11:49.173988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:18.339 [2024-12-05 03:11:49.173996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:18.339 [2024-12-05 03:11:49.174005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:18.339 [2024-12-05 03:11:49.174011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:18.339 [2024-12-05 03:11:49.174019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:18.339 [2024-12-05 03:11:49.174027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:18.339 [2024-12-05 03:11:49.174034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:18.339 [2024-12-05 03:11:49.174047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:18.339 [2024-12-05 03:11:49.174054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:18.339 [2024-12-05 03:11:49.174061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:18.339 [2024-12-05 03:11:49.174084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:18.339 [2024-12-05 03:11:49.174092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:18.339 [2024-12-05 03:11:49.174102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:18.339 [2024-12-05 03:11:49.174109] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:18.339 [2024-12-05 03:11:49.174116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:18.339 [2024-12-05 03:11:49.174123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:18.339 [2024-12-05 03:11:49.174130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:18.340 [2024-12-05 03:11:49.174138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:18.340 [2024-12-05 03:11:49.174145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:18.340 [2024-12-05 03:11:49.174152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:18.340 [2024-12-05 03:11:49.174158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:18.340 [2024-12-05 03:11:49.174166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:18.340 [2024-12-05 03:11:49.174172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:18.340 [2024-12-05 03:11:49.174179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:18.340 [2024-12-05 03:11:49.174186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:18.340 [2024-12-05 03:11:49.174192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:18.340 [2024-12-05 03:11:49.174199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:18.340 [2024-12-05 03:11:49.174206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:18.340 [2024-12-05 03:11:49.174213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:18.340 [2024-12-05 03:11:49.174219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:18.340 [2024-12-05 03:11:49.174225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:18.340 [2024-12-05 03:11:49.174232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:18.340 [2024-12-05 03:11:49.174238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:18.340 [2024-12-05 03:11:49.174245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:18.340 [2024-12-05 03:11:49.174251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:18.340 [2024-12-05 03:11:49.174258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:18.340 [2024-12-05 03:11:49.174265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:18.340 [2024-12-05 03:11:49.174271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:18.340 [2024-12-05 03:11:49.174278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:18.340 [2024-12-05 03:11:49.174285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:18.340 [2024-12-05 03:11:49.174292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:18.340 [2024-12-05 03:11:49.174298] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:18.340 [2024-12-05 03:11:49.174307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:18.340 [2024-12-05 03:11:49.174317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:18.340 [2024-12-05 03:11:49.174326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:18.340 [2024-12-05 
03:11:49.174335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:18.340 [2024-12-05 03:11:49.174341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:18.340 [2024-12-05 03:11:49.174348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:18.340 [2024-12-05 03:11:49.174355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:18.340 [2024-12-05 03:11:49.174361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:18.340 [2024-12-05 03:11:49.174368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:18.340 [2024-12-05 03:11:49.174377] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:18.340 [2024-12-05 03:11:49.174387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:18.340 [2024-12-05 03:11:49.174395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:18.340 [2024-12-05 03:11:49.174402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:18.340 [2024-12-05 03:11:49.174409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:18.340 [2024-12-05 03:11:49.174416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:18.340 [2024-12-05 03:11:49.174423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:18.340 [2024-12-05 03:11:49.174431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:18.340 [2024-12-05 03:11:49.174438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:18.340 [2024-12-05 03:11:49.174445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:18.340 [2024-12-05 03:11:49.174452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:18.340 [2024-12-05 03:11:49.174459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:18.340 [2024-12-05 03:11:49.174465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:18.340 [2024-12-05 03:11:49.174472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:18.340 [2024-12-05 03:11:49.174480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:18.340 [2024-12-05 03:11:49.174487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:18.340 [2024-12-05 03:11:49.174494] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:27:18.340 [2024-12-05 03:11:49.174502] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:18.340 [2024-12-05 03:11:49.174511] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:18.340 [2024-12-05 03:11:49.174519] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:18.340 [2024-12-05 03:11:49.174526] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:18.340 [2024-12-05 03:11:49.174533] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:18.340 [2024-12-05 03:11:49.174541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.340 [2024-12-05 03:11:49.174548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:18.340 [2024-12-05 03:11:49.174555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.642 ms 00:27:18.340 [2024-12-05 03:11:49.174564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.601 [2024-12-05 03:11:49.207241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.601 [2024-12-05 03:11:49.207287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:18.601 [2024-12-05 03:11:49.207299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.628 ms 00:27:18.601 [2024-12-05 03:11:49.207308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.601 [2024-12-05 03:11:49.207404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.601 [2024-12-05 03:11:49.207415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:18.601 [2024-12-05 03:11:49.207423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:27:18.601 [2024-12-05 03:11:49.207431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.601 [2024-12-05 03:11:49.255309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.601 [2024-12-05 03:11:49.255366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:18.601 [2024-12-05 03:11:49.255383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.815 ms 00:27:18.601 [2024-12-05 03:11:49.255392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.601 [2024-12-05 03:11:49.255449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.601 [2024-12-05 03:11:49.255461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:18.601 [2024-12-05 03:11:49.255470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:18.601 [2024-12-05 03:11:49.255479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.601 [2024-12-05 03:11:49.256122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.601 [2024-12-05 03:11:49.256158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:18.601 [2024-12-05 03:11:49.256169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:27:18.602 [2024-12-05 03:11:49.256188] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.602 [2024-12-05 03:11:49.256348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.602 [2024-12-05 03:11:49.256358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:18.602 [2024-12-05 03:11:49.256367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:27:18.602 [2024-12-05 03:11:49.256374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.602 [2024-12-05 03:11:49.272278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.602 [2024-12-05 03:11:49.272320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:18.602 [2024-12-05 03:11:49.272331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.883 ms 00:27:18.602 [2024-12-05 03:11:49.272340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.602 [2024-12-05 03:11:49.286781] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:18.602 [2024-12-05 03:11:49.286825] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:18.602 [2024-12-05 03:11:49.286839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.602 [2024-12-05 03:11:49.286849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:18.602 [2024-12-05 03:11:49.286858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.377 ms 00:27:18.602 [2024-12-05 03:11:49.286866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.602 [2024-12-05 03:11:49.313536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.602 [2024-12-05 03:11:49.313584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:18.602 [2024-12-05 03:11:49.313597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.611 ms 00:27:18.602 [2024-12-05 03:11:49.313605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.602 [2024-12-05 03:11:49.326894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.602 [2024-12-05 03:11:49.326937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:18.602 [2024-12-05 03:11:49.326949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.256 ms 00:27:18.602 [2024-12-05 03:11:49.326958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.602 [2024-12-05 03:11:49.339438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.602 [2024-12-05 03:11:49.339481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:18.602 [2024-12-05 03:11:49.339493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.430 ms 00:27:18.602 [2024-12-05 03:11:49.339501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.602 [2024-12-05 03:11:49.340209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.602 [2024-12-05 03:11:49.340245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:18.602 [2024-12-05 03:11:49.340255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:27:18.602 [2024-12-05 03:11:49.340264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
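The layout dump above reports each region twice: in MiB (ftl_layout.c dump_region) and as hex block offsets/counts in the superblock metadata dump (upgrade/ftl_sb_v5.c). A minimal sketch of how the two views line up, assuming a 4 KiB FTL block size (an assumption, but consistent with this log: l2p is 0x5000 blocks = 80.00 MiB, and 20971520 L2P entries x 4-byte addresses = 80 MiB); the helper names below are the editor's own, not SPDK APIs:

```python
# Sketch: cross-check the hex block counts from the SB metadata dump against
# the MiB sizes printed by dump_region, assuming a 4 KiB FTL block size.
FTL_BLOCK_SIZE = 4 * 1024  # bytes per FTL block (assumption consistent with this log)

def blocks_to_mib(blk_cnt: int) -> float:
    return blk_cnt * FTL_BLOCK_SIZE / (1024 * 1024)

# A few NV-cache regions copied from the dump above: name -> (blk_offs, blk_sz)
nvc_regions = {
    "sb   (type 0x0)": (0x0,    0x20),    # 32 blocks    -> 0.12 MiB
    "l2p  (type 0x2)": (0x20,   0x5000),  # 20480 blocks -> 80.00 MiB
    "p2l0 (type 0xa)": (0x5120, 0x800),   # 2048 blocks  -> 8.00 MiB (P2L checkpoint pages: 2048)
}

for name, (offs, size) in nvc_regions.items():
    print(f"{name}: offset {blocks_to_mib(offs):8.2f} MiB, size {blocks_to_mib(size):6.2f} MiB")

# The L2P region size also follows from the header values printed earlier:
l2p_entries, l2p_addr_size = 20971520, 4
assert l2p_entries * l2p_addr_size == 80 * 1024 * 1024  # 80.00 MiB, as dumped
```

The same arithmetic reproduces the base-device figures, e.g. region type 0x9 at 0x1900000 blocks = 102400.00 MiB, matching the data_btm region size above.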
00:27:18.602 [2024-12-05 03:11:49.407553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.602 [2024-12-05 03:11:49.407610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:18.602 [2024-12-05 03:11:49.407626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.266 ms 00:27:18.602 [2024-12-05 03:11:49.407636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.602 [2024-12-05 03:11:49.419631] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:18.602 [2024-12-05 03:11:49.423057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.602 [2024-12-05 03:11:49.423111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:18.602 [2024-12-05 03:11:49.423124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.356 ms 00:27:18.602 [2024-12-05 03:11:49.423139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.602 [2024-12-05 03:11:49.423250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.602 [2024-12-05 03:11:49.423263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:18.602 [2024-12-05 03:11:49.423272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:18.602 [2024-12-05 03:11:49.423281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.602 [2024-12-05 03:11:49.423357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.602 [2024-12-05 03:11:49.423368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:18.602 [2024-12-05 03:11:49.423377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:27:18.602 [2024-12-05 03:11:49.423385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.602 [2024-12-05 03:11:49.423411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.602 [2024-12-05 03:11:49.423420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:18.602 [2024-12-05 03:11:49.423429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:18.602 [2024-12-05 03:11:49.423437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.602 [2024-12-05 03:11:49.423474] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:18.602 [2024-12-05 03:11:49.423485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.602 [2024-12-05 03:11:49.423494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:18.602 [2024-12-05 03:11:49.423502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:18.602 [2024-12-05 03:11:49.423514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.864 [2024-12-05 03:11:49.450741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.864 [2024-12-05 03:11:49.450779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:18.864 [2024-12-05 03:11:49.450792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.208 ms 00:27:18.864 [2024-12-05 03:11:49.450801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.864 [2024-12-05 03:11:49.450888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.864 [2024-12-05 
03:11:49.450898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:18.864 [2024-12-05 03:11:49.450908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:27:18.864 [2024-12-05 03:11:49.450916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.864 [2024-12-05 03:11:49.452148] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 309.041 ms, result 0 00:27:19.809  [2024-12-05T03:11:51.597Z] Copying: 14/1024 [MB] (14 MBps) [2024-12-05T03:11:52.543Z] Copying: 28/1024 [MB] (13 MBps) [2024-12-05T03:11:53.487Z] Copying: 45/1024 [MB] (16 MBps) [2024-12-05T03:11:54.875Z] Copying: 59/1024 [MB] (14 MBps) [2024-12-05T03:11:55.816Z] Copying: 73/1024 [MB] (14 MBps) [2024-12-05T03:11:56.760Z] Copying: 87/1024 [MB] (13 MBps) [2024-12-05T03:11:57.704Z] Copying: 117/1024 [MB] (30 MBps) [2024-12-05T03:11:58.650Z] Copying: 145/1024 [MB] (28 MBps) [2024-12-05T03:11:59.594Z] Copying: 156/1024 [MB] (10 MBps) [2024-12-05T03:12:00.540Z] Copying: 180/1024 [MB] (24 MBps) [2024-12-05T03:12:01.485Z] Copying: 204/1024 [MB] (23 MBps) [2024-12-05T03:12:02.869Z] Copying: 230/1024 [MB] (26 MBps) [2024-12-05T03:12:03.810Z] Copying: 248/1024 [MB] (18 MBps) [2024-12-05T03:12:04.752Z] Copying: 267/1024 [MB] (19 MBps) [2024-12-05T03:12:05.698Z] Copying: 287/1024 [MB] (19 MBps) [2024-12-05T03:12:06.659Z] Copying: 304/1024 [MB] (16 MBps) [2024-12-05T03:12:07.639Z] Copying: 321/1024 [MB] (16 MBps) [2024-12-05T03:12:08.584Z] Copying: 339/1024 [MB] (17 MBps) [2024-12-05T03:12:09.527Z] Copying: 359/1024 [MB] (20 MBps) [2024-12-05T03:12:10.471Z] Copying: 387/1024 [MB] (28 MBps) [2024-12-05T03:12:11.859Z] Copying: 419/1024 [MB] (32 MBps) [2024-12-05T03:12:12.805Z] Copying: 448/1024 [MB] (29 MBps) [2024-12-05T03:12:13.749Z] Copying: 472/1024 [MB] (23 MBps) [2024-12-05T03:12:14.692Z] Copying: 488/1024 [MB] (15 MBps) [2024-12-05T03:12:15.655Z] Copying: 505/1024 [MB] (17 MBps) [2024-12-05T03:12:16.599Z] Copying: 522/1024 [MB] (17 MBps) [2024-12-05T03:12:17.545Z] Copying: 536/1024 [MB] (13 MBps) [2024-12-05T03:12:18.490Z] Copying: 546/1024 [MB] (10 MBps) [2024-12-05T03:12:19.880Z] Copying: 556/1024 [MB] (10 MBps) [2024-12-05T03:12:20.826Z] Copying: 576/1024 [MB] (19 MBps) [2024-12-05T03:12:21.769Z] Copying: 590/1024 [MB] (14 MBps) [2024-12-05T03:12:22.714Z] Copying: 612/1024 [MB] (22 MBps) [2024-12-05T03:12:23.658Z] Copying: 628/1024 [MB] (16 MBps) [2024-12-05T03:12:24.603Z] Copying: 647/1024 [MB] (18 MBps) [2024-12-05T03:12:25.547Z] Copying: 666/1024 [MB] (19 MBps) [2024-12-05T03:12:26.490Z] Copying: 683/1024 [MB] (16 MBps) [2024-12-05T03:12:27.880Z] Copying: 699/1024 [MB] (16 MBps) [2024-12-05T03:12:28.824Z] Copying: 710/1024 [MB] (10 MBps) [2024-12-05T03:12:29.770Z] Copying: 725/1024 [MB] (15 MBps) [2024-12-05T03:12:30.716Z] Copying: 742/1024 [MB] (17 MBps) [2024-12-05T03:12:31.663Z] Copying: 761/1024 [MB] (18 MBps) [2024-12-05T03:12:32.609Z] Copying: 772/1024 [MB] (11 MBps) [2024-12-05T03:12:33.555Z] Copying: 783/1024 [MB] (10 MBps) [2024-12-05T03:12:34.500Z] Copying: 802/1024 [MB] (19 MBps) [2024-12-05T03:12:35.889Z] Copying: 821/1024 [MB] (18 MBps) [2024-12-05T03:12:36.833Z] Copying: 841/1024 [MB] (19 MBps) [2024-12-05T03:12:37.934Z] Copying: 853/1024 [MB] (11 MBps) [2024-12-05T03:12:38.505Z] Copying: 883680/1048576 [kB] (10200 kBps) [2024-12-05T03:12:39.891Z] Copying: 873/1024 [MB] (10 MBps) [2024-12-05T03:12:40.834Z] Copying: 884/1024 [MB] (11 MBps) [2024-12-05T03:12:41.778Z] 
Copying: 900/1024 [MB] (16 MBps) [2024-12-05T03:12:42.722Z] Copying: 917/1024 [MB] (16 MBps) [2024-12-05T03:12:43.668Z] Copying: 927/1024 [MB] (10 MBps) [2024-12-05T03:12:44.612Z] Copying: 960208/1048576 [kB] (10180 kBps) [2024-12-05T03:12:45.556Z] Copying: 948/1024 [MB] (10 MBps) [2024-12-05T03:12:46.499Z] Copying: 959/1024 [MB] (10 MBps) [2024-12-05T03:12:47.886Z] Copying: 970/1024 [MB] (11 MBps) [2024-12-05T03:12:48.830Z] Copying: 983/1024 [MB] (12 MBps) [2024-12-05T03:12:49.776Z] Copying: 997/1024 [MB] (14 MBps) [2024-12-05T03:12:50.720Z] Copying: 1012/1024 [MB] (14 MBps) [2024-12-05T03:12:50.981Z] Copying: 1023/1024 [MB] (11 MBps) [2024-12-05T03:12:50.981Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-05 03:12:50.917758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.137 [2024-12-05 03:12:50.917852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:20.137 [2024-12-05 03:12:50.917877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:20.137 [2024-12-05 03:12:50.917892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.137 [2024-12-05 03:12:50.921793] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:20.137 [2024-12-05 03:12:50.927678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.137 [2024-12-05 03:12:50.927730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:20.137 [2024-12-05 03:12:50.927744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.833 ms 00:28:20.137 [2024-12-05 03:12:50.927760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.137 [2024-12-05 03:12:50.938585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.137 [2024-12-05 03:12:50.938636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:20.137 [2024-12-05 03:12:50.938650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.660 ms 00:28:20.137 [2024-12-05 03:12:50.938660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.137 [2024-12-05 03:12:50.961819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.137 [2024-12-05 03:12:50.961876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:20.137 [2024-12-05 03:12:50.961890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.142 ms 00:28:20.137 [2024-12-05 03:12:50.961899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.137 [2024-12-05 03:12:50.968650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.137 [2024-12-05 03:12:50.968692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:20.137 [2024-12-05 03:12:50.968705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.706 ms 00:28:20.137 [2024-12-05 03:12:50.968714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.399 [2024-12-05 03:12:50.994984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.399 [2024-12-05 03:12:50.995037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:20.399 [2024-12-05 03:12:50.995050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.210 ms 00:28:20.399 [2024-12-05 03:12:50.995058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
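Each management step in this log is traced by mngt/ftl_mngt.c as a quadruple of NOTICE lines: Action (or Rollback), then name, duration, and status. A minimal parsing sketch for log lines in exactly this shape; the regex, function, and sample lines are the editor's own illustrations, not part of SPDK:

```python
# Sketch: fold the trace_step NOTICE lines ("Action"/"Rollback", "name: ...",
# "duration: ... ms", "status: ...") into one record per management step.
import re
from typing import Iterable

STEP_RE = re.compile(
    r"trace_step: \*NOTICE\*: \[FTL\]\[(?P<dev>\w+)\] "
    r"(?P<field>Action|Rollback|name: .*?|duration: [\d.]+ ms|status: \d+)\s*$"
)

def collect_steps(lines: Iterable[str]):
    steps, current = [], None
    for line in lines:
        m = STEP_RE.search(line)
        if not m:
            continue
        field = m.group("field")
        if field in ("Action", "Rollback"):
            current = {"dev": m.group("dev"), "kind": field}
            steps.append(current)
        elif current is not None and field.startswith("name: "):
            current["name"] = field[len("name: "):]
        elif current is not None and field.startswith("duration: "):
            current["duration_ms"] = float(field.split()[1])
        elif current is not None and field.startswith("status: "):
            current["status"] = int(field.split()[1])
    return steps

# Usage against two entries shaped like the ones in this log (timestamps elided):
sample = [
    "[...] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action",
    "[...] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P",
    "[...] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.142 ms",
    "[...] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0",
]
print(collect_steps(sample))
# -> [{'dev': 'ftl0', 'kind': 'Action', 'name': 'Persist L2P', 'duration_ms': 23.142, 'status': 0}]
```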
00:28:20.399 [2024-12-05 03:12:51.011816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.399 [2024-12-05 03:12:51.011866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:20.399 [2024-12-05 03:12:51.011881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.704 ms 00:28:20.399 [2024-12-05 03:12:51.011890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.663 [2024-12-05 03:12:51.279849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.663 [2024-12-05 03:12:51.279900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:20.663 [2024-12-05 03:12:51.279920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 267.908 ms 00:28:20.663 [2024-12-05 03:12:51.279928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.663 [2024-12-05 03:12:51.305625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.663 [2024-12-05 03:12:51.305673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:20.663 [2024-12-05 03:12:51.305685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.681 ms 00:28:20.663 [2024-12-05 03:12:51.305704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.663 [2024-12-05 03:12:51.330716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.663 [2024-12-05 03:12:51.330765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:20.663 [2024-12-05 03:12:51.330777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.965 ms 00:28:20.663 [2024-12-05 03:12:51.330785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.663 [2024-12-05 03:12:51.355614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.663 [2024-12-05 03:12:51.355662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:20.663 [2024-12-05 03:12:51.355675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.784 ms 00:28:20.663 [2024-12-05 03:12:51.355682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.663 [2024-12-05 03:12:51.380419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.663 [2024-12-05 03:12:51.380467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:20.663 [2024-12-05 03:12:51.380479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.661 ms 00:28:20.663 [2024-12-05 03:12:51.380486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.663 [2024-12-05 03:12:51.380529] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:20.663 [2024-12-05 03:12:51.380544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 96512 / 261120 wr_cnt: 1 state: open 00:28:20.663 [2024-12-05 03:12:51.380554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 
00:28:20.663 [2024-12-05 03:12:51.380586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 
wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:20.663 [2024-12-05 03:12:51.380997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381176] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:20.664 [2024-12-05 03:12:51.381381] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:20.664 [2024-12-05 03:12:51.381390] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4b01abfe-7442-4e7e-85e3-29dd0e69b26c 00:28:20.664 [2024-12-05 03:12:51.381411] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 96512 00:28:20.664 [2024-12-05 03:12:51.381420] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 97472 00:28:20.664 [2024-12-05 03:12:51.381427] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 96512 
00:28:20.664 [2024-12-05 03:12:51.381437] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0099 00:28:20.664 [2024-12-05 03:12:51.381444] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:20.664 [2024-12-05 03:12:51.381453] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:20.664 [2024-12-05 03:12:51.381469] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:20.664 [2024-12-05 03:12:51.381476] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:20.664 [2024-12-05 03:12:51.381483] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:20.664 [2024-12-05 03:12:51.381490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.664 [2024-12-05 03:12:51.381498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:20.664 [2024-12-05 03:12:51.381507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms 00:28:20.664 [2024-12-05 03:12:51.381514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.664 [2024-12-05 03:12:51.395044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.664 [2024-12-05 03:12:51.395103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:20.664 [2024-12-05 03:12:51.395115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.483 ms 00:28:20.664 [2024-12-05 03:12:51.395123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.664 [2024-12-05 03:12:51.395511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.664 [2024-12-05 03:12:51.395522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:20.664 [2024-12-05 03:12:51.395540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:28:20.664 [2024-12-05 03:12:51.395548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.664 [2024-12-05 03:12:51.431995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.664 [2024-12-05 03:12:51.432048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:20.664 [2024-12-05 03:12:51.432060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.664 [2024-12-05 03:12:51.432081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.664 [2024-12-05 03:12:51.432146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.664 [2024-12-05 03:12:51.432156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:20.664 [2024-12-05 03:12:51.432173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.664 [2024-12-05 03:12:51.432182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.664 [2024-12-05 03:12:51.432249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.664 [2024-12-05 03:12:51.432262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:20.664 [2024-12-05 03:12:51.432272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.664 [2024-12-05 03:12:51.432281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.664 [2024-12-05 03:12:51.432297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.664 [2024-12-05 03:12:51.432306] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:20.664 [2024-12-05 03:12:51.432316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.664 [2024-12-05 03:12:51.432325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.664 [2024-12-05 03:12:51.503273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.664 [2024-12-05 03:12:51.503322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:20.664 [2024-12-05 03:12:51.503332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.664 [2024-12-05 03:12:51.503340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.926 [2024-12-05 03:12:51.552638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.926 [2024-12-05 03:12:51.552674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:20.926 [2024-12-05 03:12:51.552684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.926 [2024-12-05 03:12:51.552694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.926 [2024-12-05 03:12:51.552750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.926 [2024-12-05 03:12:51.552757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:20.926 [2024-12-05 03:12:51.552764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.926 [2024-12-05 03:12:51.552770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.926 [2024-12-05 03:12:51.552797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.926 [2024-12-05 03:12:51.552804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:20.926 [2024-12-05 03:12:51.552811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.926 [2024-12-05 03:12:51.552818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.926 [2024-12-05 03:12:51.552890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.926 [2024-12-05 03:12:51.552898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:20.926 [2024-12-05 03:12:51.552905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.926 [2024-12-05 03:12:51.552910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.926 [2024-12-05 03:12:51.552934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.926 [2024-12-05 03:12:51.552941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:20.926 [2024-12-05 03:12:51.552948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.926 [2024-12-05 03:12:51.552953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.926 [2024-12-05 03:12:51.552983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.926 [2024-12-05 03:12:51.552990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:20.926 [2024-12-05 03:12:51.552997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.926 [2024-12-05 03:12:51.553003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.926 [2024-12-05 03:12:51.553035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:28:20.926 [2024-12-05 03:12:51.553042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:20.926 [2024-12-05 03:12:51.553048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.926 [2024-12-05 03:12:51.553055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.926 [2024-12-05 03:12:51.553168] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 636.758 ms, result 0 00:28:21.870 00:28:21.870 00:28:21.870 03:12:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:24.419 03:12:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:24.419 [2024-12-05 03:12:54.711961] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:28:24.419 [2024-12-05 03:12:54.712045] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81967 ] 00:28:24.419 [2024-12-05 03:12:54.863908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.419 [2024-12-05 03:12:54.938951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.419 [2024-12-05 03:12:55.148251] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:24.419 [2024-12-05 03:12:55.148303] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:24.687 [2024-12-05 03:12:55.302673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.687 [2024-12-05 03:12:55.302712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:24.687 [2024-12-05 03:12:55.302722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:24.687 [2024-12-05 03:12:55.302728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.687 [2024-12-05 03:12:55.302764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.687 [2024-12-05 03:12:55.302774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:24.687 [2024-12-05 03:12:55.302780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:28:24.687 [2024-12-05 03:12:55.302786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.687 [2024-12-05 03:12:55.302798] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:24.687 [2024-12-05 03:12:55.303330] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:24.687 [2024-12-05 03:12:55.303348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.687 [2024-12-05 03:12:55.303354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:24.687 [2024-12-05 03:12:55.303360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:28:24.687 [2024-12-05 03:12:55.303366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.687 [2024-12-05 03:12:55.304353] mngt/ftl_mngt_md.c: 
455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:24.687 [2024-12-05 03:12:55.314172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.687 [2024-12-05 03:12:55.314200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:24.687 [2024-12-05 03:12:55.314209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.819 ms 00:28:24.687 [2024-12-05 03:12:55.314215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.687 [2024-12-05 03:12:55.314261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.687 [2024-12-05 03:12:55.314269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:24.687 [2024-12-05 03:12:55.314275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:28:24.687 [2024-12-05 03:12:55.314280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.687 [2024-12-05 03:12:55.318606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.687 [2024-12-05 03:12:55.318630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:24.687 [2024-12-05 03:12:55.318641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.282 ms 00:28:24.687 [2024-12-05 03:12:55.318647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.687 [2024-12-05 03:12:55.318700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.687 [2024-12-05 03:12:55.318707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:24.687 [2024-12-05 03:12:55.318713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:28:24.687 [2024-12-05 03:12:55.318718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.687 [2024-12-05 03:12:55.318751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.687 [2024-12-05 03:12:55.318758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:24.687 [2024-12-05 03:12:55.318764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:24.687 [2024-12-05 03:12:55.318771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.687 [2024-12-05 03:12:55.318787] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:24.687 [2024-12-05 03:12:55.321534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.688 [2024-12-05 03:12:55.321559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:24.688 [2024-12-05 03:12:55.321567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.751 ms 00:28:24.688 [2024-12-05 03:12:55.321572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.688 [2024-12-05 03:12:55.321597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.688 [2024-12-05 03:12:55.321604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:24.688 [2024-12-05 03:12:55.321610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:24.688 [2024-12-05 03:12:55.321616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.688 [2024-12-05 03:12:55.321630] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:24.688 [2024-12-05 03:12:55.321645] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:24.688 [2024-12-05 03:12:55.321673] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:24.688 [2024-12-05 03:12:55.321684] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:24.688 [2024-12-05 03:12:55.321762] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:24.688 [2024-12-05 03:12:55.321770] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:24.688 [2024-12-05 03:12:55.321778] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:24.688 [2024-12-05 03:12:55.321785] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:24.688 [2024-12-05 03:12:55.321792] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:24.688 [2024-12-05 03:12:55.321798] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:24.688 [2024-12-05 03:12:55.321804] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:24.688 [2024-12-05 03:12:55.321811] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:24.688 [2024-12-05 03:12:55.321817] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:24.688 [2024-12-05 03:12:55.321823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.688 [2024-12-05 03:12:55.321828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:24.688 [2024-12-05 03:12:55.321834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:28:24.688 [2024-12-05 03:12:55.321840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.688 [2024-12-05 03:12:55.321902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.688 [2024-12-05 03:12:55.321914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:24.688 [2024-12-05 03:12:55.321920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:28:24.688 [2024-12-05 03:12:55.321926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.688 [2024-12-05 03:12:55.322003] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:24.688 [2024-12-05 03:12:55.322011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:24.688 [2024-12-05 03:12:55.322018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:24.688 [2024-12-05 03:12:55.322023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:24.688 [2024-12-05 03:12:55.322029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:24.688 [2024-12-05 03:12:55.322035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:24.688 [2024-12-05 03:12:55.322039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:24.688 [2024-12-05 03:12:55.322045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:24.688 [2024-12-05 03:12:55.322049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:24.688 [2024-12-05 
03:12:55.322054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:24.688 [2024-12-05 03:12:55.322060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:24.688 [2024-12-05 03:12:55.322064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:24.688 [2024-12-05 03:12:55.322080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:24.688 [2024-12-05 03:12:55.322092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:24.688 [2024-12-05 03:12:55.322098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:24.688 [2024-12-05 03:12:55.322103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:24.688 [2024-12-05 03:12:55.322108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:24.688 [2024-12-05 03:12:55.322114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:24.688 [2024-12-05 03:12:55.322118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:24.688 [2024-12-05 03:12:55.322124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:24.688 [2024-12-05 03:12:55.322129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:24.688 [2024-12-05 03:12:55.322134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:24.688 [2024-12-05 03:12:55.322139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:24.688 [2024-12-05 03:12:55.322144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:24.688 [2024-12-05 03:12:55.322149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:24.688 [2024-12-05 03:12:55.322154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:24.688 [2024-12-05 03:12:55.322159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:24.688 [2024-12-05 03:12:55.322163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:24.688 [2024-12-05 03:12:55.322168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:24.688 [2024-12-05 03:12:55.322173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:24.688 [2024-12-05 03:12:55.322178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:24.688 [2024-12-05 03:12:55.322183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:24.688 [2024-12-05 03:12:55.322188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:24.688 [2024-12-05 03:12:55.322193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:24.688 [2024-12-05 03:12:55.322198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:24.688 [2024-12-05 03:12:55.322203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:24.688 [2024-12-05 03:12:55.322208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:24.688 [2024-12-05 03:12:55.322213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:24.688 [2024-12-05 03:12:55.322217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:24.688 [2024-12-05 03:12:55.322222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:24.688 [2024-12-05 03:12:55.322227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:28:24.688 [2024-12-05 03:12:55.322231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:24.688 [2024-12-05 03:12:55.322236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:24.688 [2024-12-05 03:12:55.322241] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:24.688 [2024-12-05 03:12:55.322247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:24.688 [2024-12-05 03:12:55.322252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:24.688 [2024-12-05 03:12:55.322258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:24.688 [2024-12-05 03:12:55.322263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:24.688 [2024-12-05 03:12:55.322268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:24.688 [2024-12-05 03:12:55.322273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:24.688 [2024-12-05 03:12:55.322278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:24.688 [2024-12-05 03:12:55.322283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:24.688 [2024-12-05 03:12:55.322287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:24.688 [2024-12-05 03:12:55.322293] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:24.688 [2024-12-05 03:12:55.322303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:24.688 [2024-12-05 03:12:55.322309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:24.688 [2024-12-05 03:12:55.322315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:24.688 [2024-12-05 03:12:55.322320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:24.688 [2024-12-05 03:12:55.322325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:24.688 [2024-12-05 03:12:55.322331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:24.688 [2024-12-05 03:12:55.322336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:24.688 [2024-12-05 03:12:55.322341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:24.688 [2024-12-05 03:12:55.322347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:24.688 [2024-12-05 03:12:55.322352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:24.688 [2024-12-05 03:12:55.322357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:24.688 [2024-12-05 03:12:55.322362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:24.688 [2024-12-05 03:12:55.322367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:24.688 [2024-12-05 03:12:55.322373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:24.688 [2024-12-05 03:12:55.322378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:24.688 [2024-12-05 03:12:55.322383] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:24.688 [2024-12-05 03:12:55.322389] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:24.689 [2024-12-05 03:12:55.322395] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:24.689 [2024-12-05 03:12:55.322400] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:24.689 [2024-12-05 03:12:55.322405] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:24.689 [2024-12-05 03:12:55.322411] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:24.689 [2024-12-05 03:12:55.322417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.689 [2024-12-05 03:12:55.322422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:24.689 [2024-12-05 03:12:55.322428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.467 ms 00:28:24.689 [2024-12-05 03:12:55.322433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.689 [2024-12-05 03:12:55.343038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.689 [2024-12-05 03:12:55.343065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:24.689 [2024-12-05 03:12:55.343084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.570 ms 00:28:24.689 [2024-12-05 03:12:55.343090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.689 [2024-12-05 03:12:55.343151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.689 [2024-12-05 03:12:55.343157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:24.689 [2024-12-05 03:12:55.343163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:28:24.689 [2024-12-05 03:12:55.343171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.689 [2024-12-05 03:12:55.379800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.689 [2024-12-05 03:12:55.379832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:24.689 [2024-12-05 03:12:55.379842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.593 ms 00:28:24.689 [2024-12-05 03:12:55.379848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.689 [2024-12-05 03:12:55.379877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.689 [2024-12-05 
03:12:55.379887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:24.689 [2024-12-05 03:12:55.379894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:28:24.689 [2024-12-05 03:12:55.379900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.689 [2024-12-05 03:12:55.380217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.689 [2024-12-05 03:12:55.380229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:24.689 [2024-12-05 03:12:55.380236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:28:24.689 [2024-12-05 03:12:55.380242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.689 [2024-12-05 03:12:55.380337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.689 [2024-12-05 03:12:55.380352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:24.689 [2024-12-05 03:12:55.380362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:28:24.689 [2024-12-05 03:12:55.380368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.689 [2024-12-05 03:12:55.390754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.689 [2024-12-05 03:12:55.390782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:24.689 [2024-12-05 03:12:55.390789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.370 ms 00:28:24.689 [2024-12-05 03:12:55.390795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.689 [2024-12-05 03:12:55.400502] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:28:24.689 [2024-12-05 03:12:55.400530] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:24.689 [2024-12-05 03:12:55.400538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.689 [2024-12-05 03:12:55.400544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:24.689 [2024-12-05 03:12:55.400550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.674 ms 00:28:24.689 [2024-12-05 03:12:55.400555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.689 [2024-12-05 03:12:55.418905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.689 [2024-12-05 03:12:55.418934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:24.689 [2024-12-05 03:12:55.418941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.320 ms 00:28:24.689 [2024-12-05 03:12:55.418948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.689 [2024-12-05 03:12:55.427950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.689 [2024-12-05 03:12:55.427977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:24.689 [2024-12-05 03:12:55.427984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.975 ms 00:28:24.689 [2024-12-05 03:12:55.427989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.689 [2024-12-05 03:12:55.436673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.689 [2024-12-05 03:12:55.436706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Restore trim metadata 00:28:24.689 [2024-12-05 03:12:55.436713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.659 ms 00:28:24.689 [2024-12-05 03:12:55.436719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.689 [2024-12-05 03:12:55.437177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.689 [2024-12-05 03:12:55.437210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:24.689 [2024-12-05 03:12:55.437217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:28:24.689 [2024-12-05 03:12:55.437223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.689 [2024-12-05 03:12:55.481332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.689 [2024-12-05 03:12:55.481371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:24.689 [2024-12-05 03:12:55.481381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.096 ms 00:28:24.689 [2024-12-05 03:12:55.481388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.689 [2024-12-05 03:12:55.489322] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:24.689 [2024-12-05 03:12:55.490988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.689 [2024-12-05 03:12:55.491012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:24.689 [2024-12-05 03:12:55.491019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.468 ms 00:28:24.689 [2024-12-05 03:12:55.491025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.689 [2024-12-05 03:12:55.491084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.689 [2024-12-05 03:12:55.491094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:24.689 [2024-12-05 03:12:55.491102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:24.689 [2024-12-05 03:12:55.491108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.689 [2024-12-05 03:12:55.492063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.689 [2024-12-05 03:12:55.492100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:24.689 [2024-12-05 03:12:55.492108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.935 ms 00:28:24.689 [2024-12-05 03:12:55.492114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.689 [2024-12-05 03:12:55.492131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.689 [2024-12-05 03:12:55.492137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:24.689 [2024-12-05 03:12:55.492143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:24.689 [2024-12-05 03:12:55.492152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.689 [2024-12-05 03:12:55.492187] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:24.689 [2024-12-05 03:12:55.492195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.689 [2024-12-05 03:12:55.492201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:24.689 [2024-12-05 03:12:55.492207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.008 ms 00:28:24.689 [2024-12-05 03:12:55.492212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.689 [2024-12-05 03:12:55.509553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.689 [2024-12-05 03:12:55.509580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:24.689 [2024-12-05 03:12:55.509592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.327 ms 00:28:24.689 [2024-12-05 03:12:55.509598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.689 [2024-12-05 03:12:55.509648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.689 [2024-12-05 03:12:55.509655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:24.689 [2024-12-05 03:12:55.509661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:28:24.689 [2024-12-05 03:12:55.509667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.689 [2024-12-05 03:12:55.510390] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 207.385 ms, result 0 00:28:26.074  [2024-12-05T03:12:57.877Z] Copying: 1172/1048576 [kB] (1172 kBps) [2024-12-05T03:12:58.823Z] Copying: 4592/1048576 [kB] (3420 kBps) [2024-12-05T03:12:59.767Z] Copying: 22/1024 [MB] (17 MBps) [2024-12-05T03:13:00.712Z] Copying: 59/1024 [MB] (37 MBps) [2024-12-05T03:13:01.655Z] Copying: 96/1024 [MB] (37 MBps) [2024-12-05T03:13:03.036Z] Copying: 129/1024 [MB] (32 MBps) [2024-12-05T03:13:03.976Z] Copying: 157/1024 [MB] (28 MBps) [2024-12-05T03:13:04.918Z] Copying: 190/1024 [MB] (32 MBps) [2024-12-05T03:13:05.859Z] Copying: 217/1024 [MB] (26 MBps) [2024-12-05T03:13:06.803Z] Copying: 246/1024 [MB] (29 MBps) [2024-12-05T03:13:07.749Z] Copying: 266/1024 [MB] (20 MBps) [2024-12-05T03:13:08.694Z] Copying: 311/1024 [MB] (44 MBps) [2024-12-05T03:13:09.694Z] Copying: 349/1024 [MB] (38 MBps) [2024-12-05T03:13:11.081Z] Copying: 370/1024 [MB] (20 MBps) [2024-12-05T03:13:11.654Z] Copying: 394/1024 [MB] (24 MBps) [2024-12-05T03:13:13.042Z] Copying: 428/1024 [MB] (34 MBps) [2024-12-05T03:13:13.986Z] Copying: 457/1024 [MB] (29 MBps) [2024-12-05T03:13:14.930Z] Copying: 492/1024 [MB] (34 MBps) [2024-12-05T03:13:15.870Z] Copying: 523/1024 [MB] (31 MBps) [2024-12-05T03:13:16.813Z] Copying: 569/1024 [MB] (45 MBps) [2024-12-05T03:13:17.757Z] Copying: 615/1024 [MB] (45 MBps) [2024-12-05T03:13:18.698Z] Copying: 662/1024 [MB] (46 MBps) [2024-12-05T03:13:20.084Z] Copying: 695/1024 [MB] (32 MBps) [2024-12-05T03:13:20.656Z] Copying: 734/1024 [MB] (39 MBps) [2024-12-05T03:13:22.043Z] Copying: 769/1024 [MB] (34 MBps) [2024-12-05T03:13:22.988Z] Copying: 794/1024 [MB] (25 MBps) [2024-12-05T03:13:23.932Z] Copying: 828/1024 [MB] (33 MBps) [2024-12-05T03:13:24.877Z] Copying: 859/1024 [MB] (30 MBps) [2024-12-05T03:13:25.821Z] Copying: 886/1024 [MB] (26 MBps) [2024-12-05T03:13:26.765Z] Copying: 913/1024 [MB] (26 MBps) [2024-12-05T03:13:27.708Z] Copying: 940/1024 [MB] (27 MBps) [2024-12-05T03:13:28.651Z] Copying: 968/1024 [MB] (27 MBps) [2024-12-05T03:13:30.039Z] Copying: 996/1024 [MB] (27 MBps) [2024-12-05T03:13:30.039Z] Copying: 1021/1024 [MB] (25 MBps) [2024-12-05T03:13:31.424Z] Copying: 1024/1024 [MB] (average 30 MBps)[2024-12-05 03:13:31.234696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.580 [2024-12-05 03:13:31.234816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO 
channel 00:29:00.580 [2024-12-05 03:13:31.234839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:00.580 [2024-12-05 03:13:31.234852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.580 [2024-12-05 03:13:31.234887] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:00.580 [2024-12-05 03:13:31.239031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.580 [2024-12-05 03:13:31.239096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:00.580 [2024-12-05 03:13:31.239112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.119 ms 00:29:00.580 [2024-12-05 03:13:31.239125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.580 [2024-12-05 03:13:31.239451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.580 [2024-12-05 03:13:31.239467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:00.580 [2024-12-05 03:13:31.239480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:29:00.580 [2024-12-05 03:13:31.239491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.580 [2024-12-05 03:13:31.255552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.580 [2024-12-05 03:13:31.255611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:00.580 [2024-12-05 03:13:31.255626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.036 ms 00:29:00.580 [2024-12-05 03:13:31.255636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.580 [2024-12-05 03:13:31.261959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.580 [2024-12-05 03:13:31.262014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:00.580 [2024-12-05 03:13:31.262027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.284 ms 00:29:00.580 [2024-12-05 03:13:31.262037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.581 [2024-12-05 03:13:31.289743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.581 [2024-12-05 03:13:31.289792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:00.581 [2024-12-05 03:13:31.289806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.644 ms 00:29:00.581 [2024-12-05 03:13:31.289815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.581 [2024-12-05 03:13:31.307040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.581 [2024-12-05 03:13:31.307106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:00.581 [2024-12-05 03:13:31.307119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.177 ms 00:29:00.581 [2024-12-05 03:13:31.307129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.581 [2024-12-05 03:13:31.311682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.581 [2024-12-05 03:13:31.311727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:00.581 [2024-12-05 03:13:31.311746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.500 ms 00:29:00.581 [2024-12-05 03:13:31.311756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.581 
[2024-12-05 03:13:31.338100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.581 [2024-12-05 03:13:31.338144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:00.581 [2024-12-05 03:13:31.338156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.327 ms 00:29:00.581 [2024-12-05 03:13:31.338164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.581 [2024-12-05 03:13:31.363640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.581 [2024-12-05 03:13:31.363688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:00.581 [2024-12-05 03:13:31.363701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.431 ms 00:29:00.581 [2024-12-05 03:13:31.363710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.581 [2024-12-05 03:13:31.388867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.581 [2024-12-05 03:13:31.388911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:00.581 [2024-12-05 03:13:31.388923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.111 ms 00:29:00.581 [2024-12-05 03:13:31.388931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.581 [2024-12-05 03:13:31.413747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.581 [2024-12-05 03:13:31.413792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:00.581 [2024-12-05 03:13:31.413805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.723 ms 00:29:00.581 [2024-12-05 03:13:31.413813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.581 [2024-12-05 03:13:31.413856] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:00.581 [2024-12-05 03:13:31.413875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:00.581 [2024-12-05 03:13:31.413888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:00.581 [2024-12-05 03:13:31.413898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.413907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.413915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.413925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.413933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.413944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.413953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.413965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.413975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.413983] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.413992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 
03:13:31.414219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
00:29:00.581 [2024-12-05 03:13:31.414440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 
wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:00.581 [2024-12-05 03:13:31.414789] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:00.581 [2024-12-05 03:13:31.414799] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4b01abfe-7442-4e7e-85e3-29dd0e69b26c 00:29:00.581 [2024-12-05 03:13:31.414808] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:00.581 [2024-12-05 03:13:31.414819] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 168128 00:29:00.581 [2024-12-05 03:13:31.414827] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 166144 00:29:00.581 [2024-12-05 03:13:31.414836] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0119 00:29:00.581 [2024-12-05 03:13:31.414844] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:00.581 [2024-12-05 03:13:31.414864] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:00.581 [2024-12-05 03:13:31.414872] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:00.581 [2024-12-05 03:13:31.414879] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:00.581 [2024-12-05 03:13:31.414886] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:00.581 [2024-12-05 03:13:31.414893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.581 [2024-12-05 03:13:31.414902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:00.581 [2024-12-05 
03:13:31.414910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.038 ms 00:29:00.581 [2024-12-05 03:13:31.414918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.842 [2024-12-05 03:13:31.429735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.842 [2024-12-05 03:13:31.429774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:00.842 [2024-12-05 03:13:31.429787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.796 ms 00:29:00.842 [2024-12-05 03:13:31.429796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.842 [2024-12-05 03:13:31.430264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.842 [2024-12-05 03:13:31.430288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:00.842 [2024-12-05 03:13:31.430300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:29:00.842 [2024-12-05 03:13:31.430316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.842 [2024-12-05 03:13:31.469505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:00.842 [2024-12-05 03:13:31.469554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:00.842 [2024-12-05 03:13:31.469567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:00.842 [2024-12-05 03:13:31.469576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.842 [2024-12-05 03:13:31.469645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:00.842 [2024-12-05 03:13:31.469655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:00.842 [2024-12-05 03:13:31.469665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:00.842 [2024-12-05 03:13:31.469680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.842 [2024-12-05 03:13:31.469781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:00.842 [2024-12-05 03:13:31.469795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:00.842 [2024-12-05 03:13:31.469805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:00.842 [2024-12-05 03:13:31.469815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.842 [2024-12-05 03:13:31.469834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:00.842 [2024-12-05 03:13:31.469843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:00.842 [2024-12-05 03:13:31.469851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:00.842 [2024-12-05 03:13:31.469859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.842 [2024-12-05 03:13:31.562446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:00.842 [2024-12-05 03:13:31.562508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:00.842 [2024-12-05 03:13:31.562523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:00.842 [2024-12-05 03:13:31.562533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.843 [2024-12-05 03:13:31.637059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:00.843 [2024-12-05 03:13:31.637138] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:00.843 [2024-12-05 03:13:31.637152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:00.843 [2024-12-05 03:13:31.637161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.843 [2024-12-05 03:13:31.637243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:00.843 [2024-12-05 03:13:31.637269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:00.843 [2024-12-05 03:13:31.637279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:00.843 [2024-12-05 03:13:31.637289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.843 [2024-12-05 03:13:31.637373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:00.843 [2024-12-05 03:13:31.637387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:00.843 [2024-12-05 03:13:31.637397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:00.843 [2024-12-05 03:13:31.637407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.843 [2024-12-05 03:13:31.637518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:00.843 [2024-12-05 03:13:31.637535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:00.843 [2024-12-05 03:13:31.637546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:00.843 [2024-12-05 03:13:31.637554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.843 [2024-12-05 03:13:31.637595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:00.843 [2024-12-05 03:13:31.637607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:00.843 [2024-12-05 03:13:31.637618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:00.843 [2024-12-05 03:13:31.637626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.843 [2024-12-05 03:13:31.637678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:00.843 [2024-12-05 03:13:31.637694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:00.843 [2024-12-05 03:13:31.637704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:00.843 [2024-12-05 03:13:31.637713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.843 [2024-12-05 03:13:31.637773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:00.843 [2024-12-05 03:13:31.637788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:00.843 [2024-12-05 03:13:31.637798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:00.843 [2024-12-05 03:13:31.637807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.843 [2024-12-05 03:13:31.637975] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 403.238 ms, result 0 00:29:01.786 00:29:01.786 00:29:01.786 03:13:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:03.704 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:03.704 03:13:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:03.704 [2024-12-05 03:13:34.532971] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:29:03.704 [2024-12-05 03:13:34.533119] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82370 ] 00:29:03.965 [2024-12-05 03:13:34.692904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.226 [2024-12-05 03:13:34.810799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:04.488 [2024-12-05 03:13:35.108196] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:04.488 [2024-12-05 03:13:35.108277] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:04.488 [2024-12-05 03:13:35.272342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.488 [2024-12-05 03:13:35.272411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:04.488 [2024-12-05 03:13:35.272426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:04.488 [2024-12-05 03:13:35.272435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.488 [2024-12-05 03:13:35.272491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.488 [2024-12-05 03:13:35.272505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:04.488 [2024-12-05 03:13:35.272515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:29:04.488 [2024-12-05 03:13:35.272524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.488 [2024-12-05 03:13:35.272544] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:04.488 [2024-12-05 03:13:35.273589] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:04.488 [2024-12-05 03:13:35.273647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.488 [2024-12-05 03:13:35.273657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:04.488 [2024-12-05 03:13:35.273668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.106 ms 00:29:04.488 [2024-12-05 03:13:35.273676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.488 [2024-12-05 03:13:35.275605] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:04.488 [2024-12-05 03:13:35.289990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.488 [2024-12-05 03:13:35.290034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:04.488 [2024-12-05 03:13:35.290048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.388 ms 00:29:04.488 [2024-12-05 03:13:35.290057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.488 [2024-12-05 03:13:35.290156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.488 [2024-12-05 03:13:35.290168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:04.488 [2024-12-05 03:13:35.290177] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:29:04.488 [2024-12-05 03:13:35.290185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.488 [2024-12-05 03:13:35.298577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.488 [2024-12-05 03:13:35.298618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:04.488 [2024-12-05 03:13:35.298631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.312 ms 00:29:04.488 [2024-12-05 03:13:35.298648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.488 [2024-12-05 03:13:35.298732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.488 [2024-12-05 03:13:35.298741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:04.488 [2024-12-05 03:13:35.298751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:29:04.488 [2024-12-05 03:13:35.298760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.488 [2024-12-05 03:13:35.298807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.488 [2024-12-05 03:13:35.298817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:04.488 [2024-12-05 03:13:35.298826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:04.488 [2024-12-05 03:13:35.298833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.488 [2024-12-05 03:13:35.298862] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:04.488 [2024-12-05 03:13:35.303102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.488 [2024-12-05 03:13:35.303138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:04.488 [2024-12-05 03:13:35.303153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.247 ms 00:29:04.488 [2024-12-05 03:13:35.303161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.488 [2024-12-05 03:13:35.303201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.488 [2024-12-05 03:13:35.303211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:04.488 [2024-12-05 03:13:35.303220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:29:04.488 [2024-12-05 03:13:35.303228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.488 [2024-12-05 03:13:35.303280] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:04.488 [2024-12-05 03:13:35.303306] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:04.488 [2024-12-05 03:13:35.303344] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:04.488 [2024-12-05 03:13:35.303363] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:04.488 [2024-12-05 03:13:35.303474] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:04.488 [2024-12-05 03:13:35.303485] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:04.488 [2024-12-05 03:13:35.303496] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:04.488 [2024-12-05 03:13:35.303506] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:04.488 [2024-12-05 03:13:35.303516] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:04.488 [2024-12-05 03:13:35.303524] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:04.488 [2024-12-05 03:13:35.303532] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:04.488 [2024-12-05 03:13:35.303543] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:04.488 [2024-12-05 03:13:35.303551] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:04.488 [2024-12-05 03:13:35.303559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.488 [2024-12-05 03:13:35.303568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:04.488 [2024-12-05 03:13:35.303576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:29:04.488 [2024-12-05 03:13:35.303584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.488 [2024-12-05 03:13:35.303667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.488 [2024-12-05 03:13:35.303676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:04.488 [2024-12-05 03:13:35.303684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:29:04.488 [2024-12-05 03:13:35.303693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.488 [2024-12-05 03:13:35.303804] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:04.488 [2024-12-05 03:13:35.303815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:04.488 [2024-12-05 03:13:35.303823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:04.488 [2024-12-05 03:13:35.303831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:04.488 [2024-12-05 03:13:35.303840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:04.488 [2024-12-05 03:13:35.303846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:04.488 [2024-12-05 03:13:35.303853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:04.488 [2024-12-05 03:13:35.303862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:04.488 [2024-12-05 03:13:35.303870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:04.488 [2024-12-05 03:13:35.303877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:04.488 [2024-12-05 03:13:35.303884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:04.488 [2024-12-05 03:13:35.303891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:04.488 [2024-12-05 03:13:35.303898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:04.488 [2024-12-05 03:13:35.303910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:04.488 [2024-12-05 03:13:35.303919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:04.488 [2024-12-05 03:13:35.303926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:04.488 
[2024-12-05 03:13:35.303933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:04.488 [2024-12-05 03:13:35.303940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:04.488 [2024-12-05 03:13:35.303946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:04.488 [2024-12-05 03:13:35.303953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:04.488 [2024-12-05 03:13:35.303960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:04.488 [2024-12-05 03:13:35.303967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:04.488 [2024-12-05 03:13:35.303973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:04.488 [2024-12-05 03:13:35.303979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:04.488 [2024-12-05 03:13:35.303986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:04.488 [2024-12-05 03:13:35.303992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:04.488 [2024-12-05 03:13:35.303999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:04.488 [2024-12-05 03:13:35.304005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:04.488 [2024-12-05 03:13:35.304011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:04.488 [2024-12-05 03:13:35.304019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:04.488 [2024-12-05 03:13:35.304025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:04.488 [2024-12-05 03:13:35.304032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:04.488 [2024-12-05 03:13:35.304038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:04.488 [2024-12-05 03:13:35.304045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:04.488 [2024-12-05 03:13:35.304051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:04.488 [2024-12-05 03:13:35.304057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:04.488 [2024-12-05 03:13:35.304064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:04.488 [2024-12-05 03:13:35.304093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:04.489 [2024-12-05 03:13:35.304100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:04.489 [2024-12-05 03:13:35.304107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:04.489 [2024-12-05 03:13:35.304114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:04.489 [2024-12-05 03:13:35.304120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:04.489 [2024-12-05 03:13:35.304128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:04.489 [2024-12-05 03:13:35.304136] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:04.489 [2024-12-05 03:13:35.304144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:04.489 [2024-12-05 03:13:35.304152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:04.489 [2024-12-05 03:13:35.304160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:04.489 [2024-12-05 03:13:35.304169] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:29:04.489 [2024-12-05 03:13:35.304176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:04.489 [2024-12-05 03:13:35.304184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:04.489 [2024-12-05 03:13:35.304190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:04.489 [2024-12-05 03:13:35.304198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:04.489 [2024-12-05 03:13:35.304205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:04.489 [2024-12-05 03:13:35.304216] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:04.489 [2024-12-05 03:13:35.304226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:04.489 [2024-12-05 03:13:35.304239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:04.489 [2024-12-05 03:13:35.304247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:04.489 [2024-12-05 03:13:35.304254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:04.489 [2024-12-05 03:13:35.304261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:04.489 [2024-12-05 03:13:35.304269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:04.489 [2024-12-05 03:13:35.304276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:04.489 [2024-12-05 03:13:35.304283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:04.489 [2024-12-05 03:13:35.304291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:04.489 [2024-12-05 03:13:35.304299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:04.489 [2024-12-05 03:13:35.304306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:04.489 [2024-12-05 03:13:35.304313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:04.489 [2024-12-05 03:13:35.304320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:04.489 [2024-12-05 03:13:35.304327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:04.489 [2024-12-05 03:13:35.304334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:04.489 [2024-12-05 03:13:35.304341] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:04.489 [2024-12-05 
03:13:35.304350] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:04.489 [2024-12-05 03:13:35.304358] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:04.489 [2024-12-05 03:13:35.304365] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:04.489 [2024-12-05 03:13:35.304371] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:04.489 [2024-12-05 03:13:35.304378] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:04.489 [2024-12-05 03:13:35.304391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.489 [2024-12-05 03:13:35.304399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:04.489 [2024-12-05 03:13:35.304407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.660 ms 00:29:04.489 [2024-12-05 03:13:35.304416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.750 [2024-12-05 03:13:35.336923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.750 [2024-12-05 03:13:35.336964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:04.750 [2024-12-05 03:13:35.336976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.462 ms 00:29:04.750 [2024-12-05 03:13:35.336988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.750 [2024-12-05 03:13:35.337092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.750 [2024-12-05 03:13:35.337102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:04.750 [2024-12-05 03:13:35.337111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:29:04.750 [2024-12-05 03:13:35.337119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.750 [2024-12-05 03:13:35.384063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.750 [2024-12-05 03:13:35.384136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:04.750 [2024-12-05 03:13:35.384150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.884 ms 00:29:04.750 [2024-12-05 03:13:35.384159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.750 [2024-12-05 03:13:35.384212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.750 [2024-12-05 03:13:35.384222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:04.750 [2024-12-05 03:13:35.384236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:04.750 [2024-12-05 03:13:35.384244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.750 [2024-12-05 03:13:35.384859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.750 [2024-12-05 03:13:35.384883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:04.750 [2024-12-05 03:13:35.384895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:29:04.750 [2024-12-05 03:13:35.384903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:29:04.750 [2024-12-05 03:13:35.385063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.750 [2024-12-05 03:13:35.385097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:04.750 [2024-12-05 03:13:35.385113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:29:04.750 [2024-12-05 03:13:35.385121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.750 [2024-12-05 03:13:35.400964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.750 [2024-12-05 03:13:35.401011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:04.750 [2024-12-05 03:13:35.401023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.822 ms 00:29:04.750 [2024-12-05 03:13:35.401031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.750 [2024-12-05 03:13:35.415341] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:04.750 [2024-12-05 03:13:35.415385] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:04.750 [2024-12-05 03:13:35.415400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.750 [2024-12-05 03:13:35.415410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:04.750 [2024-12-05 03:13:35.415420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.238 ms 00:29:04.750 [2024-12-05 03:13:35.415427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.750 [2024-12-05 03:13:35.441411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.750 [2024-12-05 03:13:35.441457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:04.750 [2024-12-05 03:13:35.441469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.927 ms 00:29:04.750 [2024-12-05 03:13:35.441477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.750 [2024-12-05 03:13:35.454531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.750 [2024-12-05 03:13:35.454574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:04.750 [2024-12-05 03:13:35.454586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.986 ms 00:29:04.750 [2024-12-05 03:13:35.454595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.750 [2024-12-05 03:13:35.467393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.750 [2024-12-05 03:13:35.467434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:04.750 [2024-12-05 03:13:35.467446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.750 ms 00:29:04.750 [2024-12-05 03:13:35.467454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.750 [2024-12-05 03:13:35.468112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.750 [2024-12-05 03:13:35.468138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:04.750 [2024-12-05 03:13:35.468153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:29:04.750 [2024-12-05 03:13:35.468161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.750 [2024-12-05 
03:13:35.536900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.750 [2024-12-05 03:13:35.536951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:04.750 [2024-12-05 03:13:35.536975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.718 ms 00:29:04.750 [2024-12-05 03:13:35.536985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.750 [2024-12-05 03:13:35.548513] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:04.750 [2024-12-05 03:13:35.551594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.750 [2024-12-05 03:13:35.551635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:04.750 [2024-12-05 03:13:35.551647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.553 ms 00:29:04.750 [2024-12-05 03:13:35.551656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.750 [2024-12-05 03:13:35.551737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.750 [2024-12-05 03:13:35.551748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:04.750 [2024-12-05 03:13:35.551760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:29:04.750 [2024-12-05 03:13:35.551769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.750 [2024-12-05 03:13:35.552606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.750 [2024-12-05 03:13:35.552644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:04.750 [2024-12-05 03:13:35.552656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.797 ms 00:29:04.750 [2024-12-05 03:13:35.552666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.750 [2024-12-05 03:13:35.552695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.750 [2024-12-05 03:13:35.552704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:04.750 [2024-12-05 03:13:35.552714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:04.750 [2024-12-05 03:13:35.552723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.750 [2024-12-05 03:13:35.552769] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:04.750 [2024-12-05 03:13:35.552781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.750 [2024-12-05 03:13:35.552790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:04.750 [2024-12-05 03:13:35.552800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:04.750 [2024-12-05 03:13:35.552808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.750 [2024-12-05 03:13:35.578424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.750 [2024-12-05 03:13:35.578469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:04.750 [2024-12-05 03:13:35.578488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.596 ms 00:29:04.750 [2024-12-05 03:13:35.578497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.750 [2024-12-05 03:13:35.578581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.750 [2024-12-05 03:13:35.578592] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:04.750 [2024-12-05 03:13:35.578601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:29:04.750 [2024-12-05 03:13:35.578609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.750 [2024-12-05 03:13:35.580032] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 307.186 ms, result 0 00:29:06.136  [2024-12-05T03:13:37.925Z] Copying: 12/1024 [MB] (12 MBps) [2024-12-05T03:13:38.868Z] Copying: 26/1024 [MB] (14 MBps) [2024-12-05T03:13:39.811Z] Copying: 39/1024 [MB] (13 MBps) [2024-12-05T03:13:40.764Z] Copying: 58/1024 [MB] (19 MBps) [2024-12-05T03:13:41.768Z] Copying: 73/1024 [MB] (14 MBps) [2024-12-05T03:13:43.176Z] Copying: 94/1024 [MB] (20 MBps) [2024-12-05T03:13:44.122Z] Copying: 113/1024 [MB] (19 MBps) [2024-12-05T03:13:45.066Z] Copying: 131/1024 [MB] (17 MBps) [2024-12-05T03:13:46.009Z] Copying: 146/1024 [MB] (15 MBps) [2024-12-05T03:13:46.952Z] Copying: 162/1024 [MB] (16 MBps) [2024-12-05T03:13:47.896Z] Copying: 180/1024 [MB] (17 MBps) [2024-12-05T03:13:48.842Z] Copying: 196/1024 [MB] (15 MBps) [2024-12-05T03:13:49.787Z] Copying: 212/1024 [MB] (16 MBps) [2024-12-05T03:13:51.176Z] Copying: 224/1024 [MB] (12 MBps) [2024-12-05T03:13:52.118Z] Copying: 241/1024 [MB] (16 MBps) [2024-12-05T03:13:53.058Z] Copying: 258/1024 [MB] (17 MBps) [2024-12-05T03:13:54.002Z] Copying: 271/1024 [MB] (13 MBps) [2024-12-05T03:13:54.944Z] Copying: 282/1024 [MB] (10 MBps) [2024-12-05T03:13:55.902Z] Copying: 295/1024 [MB] (13 MBps) [2024-12-05T03:13:56.844Z] Copying: 310/1024 [MB] (14 MBps) [2024-12-05T03:13:57.787Z] Copying: 325/1024 [MB] (14 MBps) [2024-12-05T03:13:59.174Z] Copying: 335/1024 [MB] (10 MBps) [2024-12-05T03:14:00.119Z] Copying: 346/1024 [MB] (10 MBps) [2024-12-05T03:14:01.065Z] Copying: 364/1024 [MB] (18 MBps) [2024-12-05T03:14:02.009Z] Copying: 379/1024 [MB] (15 MBps) [2024-12-05T03:14:02.956Z] Copying: 406/1024 [MB] (27 MBps) [2024-12-05T03:14:03.903Z] Copying: 417/1024 [MB] (10 MBps) [2024-12-05T03:14:04.846Z] Copying: 435/1024 [MB] (17 MBps) [2024-12-05T03:14:05.808Z] Copying: 447/1024 [MB] (11 MBps) [2024-12-05T03:14:07.198Z] Copying: 458/1024 [MB] (10 MBps) [2024-12-05T03:14:07.771Z] Copying: 469/1024 [MB] (11 MBps) [2024-12-05T03:14:09.161Z] Copying: 482/1024 [MB] (12 MBps) [2024-12-05T03:14:10.107Z] Copying: 506/1024 [MB] (23 MBps) [2024-12-05T03:14:11.053Z] Copying: 521/1024 [MB] (15 MBps) [2024-12-05T03:14:11.999Z] Copying: 538/1024 [MB] (17 MBps) [2024-12-05T03:14:12.990Z] Copying: 552/1024 [MB] (13 MBps) [2024-12-05T03:14:13.973Z] Copying: 571/1024 [MB] (19 MBps) [2024-12-05T03:14:14.918Z] Copying: 589/1024 [MB] (17 MBps) [2024-12-05T03:14:15.860Z] Copying: 603/1024 [MB] (14 MBps) [2024-12-05T03:14:16.803Z] Copying: 626/1024 [MB] (23 MBps) [2024-12-05T03:14:18.188Z] Copying: 645/1024 [MB] (18 MBps) [2024-12-05T03:14:18.760Z] Copying: 671/1024 [MB] (26 MBps) [2024-12-05T03:14:20.149Z] Copying: 689/1024 [MB] (18 MBps) [2024-12-05T03:14:21.093Z] Copying: 705/1024 [MB] (15 MBps) [2024-12-05T03:14:22.038Z] Copying: 725/1024 [MB] (20 MBps) [2024-12-05T03:14:22.982Z] Copying: 742/1024 [MB] (16 MBps) [2024-12-05T03:14:23.927Z] Copying: 755/1024 [MB] (13 MBps) [2024-12-05T03:14:24.872Z] Copying: 774/1024 [MB] (18 MBps) [2024-12-05T03:14:25.815Z] Copying: 790/1024 [MB] (16 MBps) [2024-12-05T03:14:26.760Z] Copying: 805/1024 [MB] (14 MBps) [2024-12-05T03:14:28.198Z] Copying: 827/1024 [MB] (21 MBps) 
[2024-12-05T03:14:28.769Z] Copying: 842/1024 [MB] (15 MBps) [2024-12-05T03:14:30.155Z] Copying: 857/1024 [MB] (14 MBps) [2024-12-05T03:14:31.099Z] Copying: 878/1024 [MB] (21 MBps) [2024-12-05T03:14:32.046Z] Copying: 897/1024 [MB] (18 MBps) [2024-12-05T03:14:32.991Z] Copying: 914/1024 [MB] (16 MBps) [2024-12-05T03:14:33.935Z] Copying: 925/1024 [MB] (10 MBps) [2024-12-05T03:14:34.877Z] Copying: 940/1024 [MB] (15 MBps) [2024-12-05T03:14:35.818Z] Copying: 959/1024 [MB] (19 MBps) [2024-12-05T03:14:37.202Z] Copying: 979/1024 [MB] (19 MBps) [2024-12-05T03:14:37.775Z] Copying: 995/1024 [MB] (16 MBps) [2024-12-05T03:14:38.037Z] Copying: 1015/1024 [MB] (20 MBps) [2024-12-05T03:14:38.609Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-05 03:14:38.398389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.765 [2024-12-05 03:14:38.398438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:07.765 [2024-12-05 03:14:38.398450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:07.765 [2024-12-05 03:14:38.398457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.765 [2024-12-05 03:14:38.398474] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:07.765 [2024-12-05 03:14:38.400556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.765 [2024-12-05 03:14:38.400589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:07.765 [2024-12-05 03:14:38.400597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.070 ms 00:30:07.765 [2024-12-05 03:14:38.400605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.765 [2024-12-05 03:14:38.400781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.765 [2024-12-05 03:14:38.400793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:07.765 [2024-12-05 03:14:38.400801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:30:07.765 [2024-12-05 03:14:38.400807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.765 [2024-12-05 03:14:38.403745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.765 [2024-12-05 03:14:38.403765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:07.765 [2024-12-05 03:14:38.403773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.927 ms 00:30:07.765 [2024-12-05 03:14:38.403783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.765 [2024-12-05 03:14:38.409016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.765 [2024-12-05 03:14:38.409042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:07.765 [2024-12-05 03:14:38.409050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.219 ms 00:30:07.766 [2024-12-05 03:14:38.409057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.766 [2024-12-05 03:14:38.430646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.766 [2024-12-05 03:14:38.430677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:07.766 [2024-12-05 03:14:38.430685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.537 ms 00:30:07.766 [2024-12-05 03:14:38.430691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:30:07.766 [2024-12-05 03:14:38.442365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.766 [2024-12-05 03:14:38.442393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:07.766 [2024-12-05 03:14:38.442402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.646 ms 00:30:07.766 [2024-12-05 03:14:38.442409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.766 [2024-12-05 03:14:38.444580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.766 [2024-12-05 03:14:38.444617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:07.766 [2024-12-05 03:14:38.444627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.137 ms 00:30:07.766 [2024-12-05 03:14:38.444633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.766 [2024-12-05 03:14:38.462693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.766 [2024-12-05 03:14:38.462720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:07.766 [2024-12-05 03:14:38.462728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.048 ms 00:30:07.766 [2024-12-05 03:14:38.462735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.766 [2024-12-05 03:14:38.479894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.766 [2024-12-05 03:14:38.479919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:07.766 [2024-12-05 03:14:38.479927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.136 ms 00:30:07.766 [2024-12-05 03:14:38.479933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.766 [2024-12-05 03:14:38.496857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.766 [2024-12-05 03:14:38.496883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:07.766 [2024-12-05 03:14:38.496890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.900 ms 00:30:07.766 [2024-12-05 03:14:38.496896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.766 [2024-12-05 03:14:38.513591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.766 [2024-12-05 03:14:38.513617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:07.766 [2024-12-05 03:14:38.513624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.653 ms 00:30:07.766 [2024-12-05 03:14:38.513630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.766 [2024-12-05 03:14:38.513653] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:07.766 [2024-12-05 03:14:38.513669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:07.766 [2024-12-05 03:14:38.513678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:30:07.766 [2024-12-05 03:14:38.513684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 
0 state: free 00:30:07.766 [2024-12-05 03:14:38.513702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 
/ 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:07.766 [2024-12-05 03:14:38.513903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.513909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.513914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.513920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.513925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.513931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.513936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.513942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.513947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.513953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.513958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.513964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.513970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.513975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.513981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.513986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.513992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.513997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514127] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:07.767 [2024-12-05 03:14:38.514254] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:07.767 [2024-12-05 03:14:38.514259] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4b01abfe-7442-4e7e-85e3-29dd0e69b26c 00:30:07.767 [2024-12-05 03:14:38.514265] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:30:07.767 [2024-12-05 03:14:38.514270] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:07.767 [2024-12-05 03:14:38.514276] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user 
writes: 0 00:30:07.767 [2024-12-05 03:14:38.514281] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:07.767 [2024-12-05 03:14:38.514291] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:07.767 [2024-12-05 03:14:38.514297] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:07.767 [2024-12-05 03:14:38.514303] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:07.767 [2024-12-05 03:14:38.514307] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:07.767 [2024-12-05 03:14:38.514313] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:07.767 [2024-12-05 03:14:38.514318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.767 [2024-12-05 03:14:38.514324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:07.767 [2024-12-05 03:14:38.514330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:30:07.768 [2024-12-05 03:14:38.514338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.768 [2024-12-05 03:14:38.523532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.768 [2024-12-05 03:14:38.523555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:07.768 [2024-12-05 03:14:38.523563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.182 ms 00:30:07.768 [2024-12-05 03:14:38.523569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.768 [2024-12-05 03:14:38.523834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.768 [2024-12-05 03:14:38.523852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:07.768 [2024-12-05 03:14:38.523859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:30:07.768 [2024-12-05 03:14:38.523864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.768 [2024-12-05 03:14:38.549223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.768 [2024-12-05 03:14:38.549250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:07.768 [2024-12-05 03:14:38.549257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.768 [2024-12-05 03:14:38.549263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.768 [2024-12-05 03:14:38.549298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.768 [2024-12-05 03:14:38.549306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:07.768 [2024-12-05 03:14:38.549312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.768 [2024-12-05 03:14:38.549318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.768 [2024-12-05 03:14:38.549360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.768 [2024-12-05 03:14:38.549367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:07.768 [2024-12-05 03:14:38.549373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.768 [2024-12-05 03:14:38.549378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.768 [2024-12-05 03:14:38.549389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.768 [2024-12-05 03:14:38.549395] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:07.768 [2024-12-05 03:14:38.549404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.768 [2024-12-05 03:14:38.549409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.029 [2024-12-05 03:14:38.607360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:08.029 [2024-12-05 03:14:38.607397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:08.029 [2024-12-05 03:14:38.607406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:08.029 [2024-12-05 03:14:38.607412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.029 [2024-12-05 03:14:38.655375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:08.029 [2024-12-05 03:14:38.655410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:08.029 [2024-12-05 03:14:38.655419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:08.029 [2024-12-05 03:14:38.655425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.029 [2024-12-05 03:14:38.655473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:08.029 [2024-12-05 03:14:38.655481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:08.029 [2024-12-05 03:14:38.655487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:08.029 [2024-12-05 03:14:38.655493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.029 [2024-12-05 03:14:38.655518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:08.029 [2024-12-05 03:14:38.655525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:08.029 [2024-12-05 03:14:38.655531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:08.029 [2024-12-05 03:14:38.655539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.029 [2024-12-05 03:14:38.655607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:08.029 [2024-12-05 03:14:38.655614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:08.029 [2024-12-05 03:14:38.655620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:08.029 [2024-12-05 03:14:38.655626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.029 [2024-12-05 03:14:38.655648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:08.029 [2024-12-05 03:14:38.655654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:08.029 [2024-12-05 03:14:38.655661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:08.029 [2024-12-05 03:14:38.655667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.029 [2024-12-05 03:14:38.655695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:08.029 [2024-12-05 03:14:38.655702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:08.029 [2024-12-05 03:14:38.655708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:08.029 [2024-12-05 03:14:38.655714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.029 [2024-12-05 03:14:38.655748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:30:08.029 [2024-12-05 03:14:38.655756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:08.029 [2024-12-05 03:14:38.655762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:08.029 [2024-12-05 03:14:38.655770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.029 [2024-12-05 03:14:38.655858] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 257.448 ms, result 0 00:30:08.601 00:30:08.601 00:30:08.601 03:14:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:10.519 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:30:10.519 03:14:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:30:10.519 03:14:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:30:10.519 03:14:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:10.519 03:14:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:10.780 03:14:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:30:10.780 03:14:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:10.780 03:14:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:10.780 03:14:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 80475 00:30:10.780 03:14:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80475 ']' 00:30:10.780 03:14:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 80475 00:30:10.780 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80475) - No such process 00:30:10.780 Process with pid 80475 is not found 00:30:10.780 03:14:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 80475 is not found' 00:30:10.780 03:14:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:30:11.041 03:14:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:30:11.041 Remove shared memory files 00:30:11.041 03:14:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:11.041 03:14:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:11.041 03:14:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:11.041 03:14:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:30:11.041 03:14:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:11.041 03:14:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:11.041 00:30:11.041 real 4m2.446s 00:30:11.041 user 4m26.162s 00:30:11.041 sys 0m27.042s 00:30:11.041 03:14:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:11.041 ************************************ 00:30:11.041 END TEST ftl_dirty_shutdown 00:30:11.041 ************************************ 00:30:11.041 03:14:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:11.041 03:14:41 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 
00:30:11.041 03:14:41 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:11.041 03:14:41 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:11.041 03:14:41 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:11.041 ************************************ 00:30:11.041 START TEST ftl_upgrade_shutdown 00:30:11.041 ************************************ 00:30:11.041 03:14:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:30:11.303 * Looking for test storage... 00:30:11.304 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:11.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.304 --rc genhtml_branch_coverage=1 00:30:11.304 --rc genhtml_function_coverage=1 00:30:11.304 --rc genhtml_legend=1 00:30:11.304 --rc geninfo_all_blocks=1 00:30:11.304 --rc geninfo_unexecuted_blocks=1 00:30:11.304 00:30:11.304 ' 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:11.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.304 --rc genhtml_branch_coverage=1 00:30:11.304 --rc genhtml_function_coverage=1 00:30:11.304 --rc genhtml_legend=1 00:30:11.304 --rc geninfo_all_blocks=1 00:30:11.304 --rc geninfo_unexecuted_blocks=1 00:30:11.304 00:30:11.304 ' 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:11.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.304 --rc genhtml_branch_coverage=1 00:30:11.304 --rc genhtml_function_coverage=1 00:30:11.304 --rc genhtml_legend=1 00:30:11.304 --rc geninfo_all_blocks=1 00:30:11.304 --rc geninfo_unexecuted_blocks=1 00:30:11.304 00:30:11.304 ' 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:11.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.304 --rc genhtml_branch_coverage=1 00:30:11.304 --rc genhtml_function_coverage=1 00:30:11.304 --rc genhtml_legend=1 00:30:11.304 --rc geninfo_all_blocks=1 00:30:11.304 --rc geninfo_unexecuted_blocks=1 00:30:11.304 00:30:11.304 ' 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:30:11.304 03:14:41 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83117 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83117 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83117 ']' 00:30:11.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:11.304 03:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:30:11.304 [2024-12-05 03:14:42.075887] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
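At this point the run has pinned its parameters (FTL_BASE=0000:00:11.0 at 20480 MiB, FTL_CACHE=0000:00:10.0 at 5120 MiB, l2p_dram_limit 2) and launches the SPDK target on core 0, then blocks in waitforlisten until the RPC socket answers. Roughly, and only as a sketch (the rpc_get_methods poll is an approximation of what the helper actually does), the bring-up amounts to:

  # launch the target on core 0 and wait for its RPC socket (sketch)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --cpumask='[0]' &
  spdk_tgt_pid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.1   # keep polling until the target is listening on /var/tmp/spdk.sock
  done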
00:30:11.304 [2024-12-05 03:14:42.076387] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83117 ] 00:30:11.566 [2024-12-05 03:14:42.239532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.566 [2024-12-05 03:14:42.357228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.510 03:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:12.510 03:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:12.510 03:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:12.510 03:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:30:12.510 03:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:30:12.510 03:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:12.510 03:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:30:12.510 03:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:12.510 03:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:30:12.510 03:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:12.510 03:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:30:12.510 03:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:12.510 03:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:30:12.510 03:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:12.510 03:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:30:12.510 03:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:12.510 03:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:30:12.511 03:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:30:12.511 03:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:30:12.511 03:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:12.511 03:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:30:12.511 03:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:30:12.511 03:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:30:12.511 03:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:30:12.511 03:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:30:12.511 03:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:30:12.511 03:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:30:12.511 03:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:12.511 03:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:12.511 03:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:30:12.511 03:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:30:12.772 03:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:12.772 { 00:30:12.772 "name": "basen1", 00:30:12.772 "aliases": [ 00:30:12.772 "69cd81d4-33f2-4557-8ebc-06dbe844fcab" 00:30:12.772 ], 00:30:12.772 "product_name": "NVMe disk", 00:30:12.772 "block_size": 4096, 00:30:12.772 "num_blocks": 1310720, 00:30:12.772 "uuid": "69cd81d4-33f2-4557-8ebc-06dbe844fcab", 00:30:12.772 "numa_id": -1, 00:30:12.772 "assigned_rate_limits": { 00:30:12.772 "rw_ios_per_sec": 0, 00:30:12.772 "rw_mbytes_per_sec": 0, 00:30:12.772 "r_mbytes_per_sec": 0, 00:30:12.772 "w_mbytes_per_sec": 0 00:30:12.772 }, 00:30:12.772 "claimed": true, 00:30:12.772 "claim_type": "read_many_write_one", 00:30:12.772 "zoned": false, 00:30:12.772 "supported_io_types": { 00:30:12.772 "read": true, 00:30:12.772 "write": true, 00:30:12.772 "unmap": true, 00:30:12.772 "flush": true, 00:30:12.772 "reset": true, 00:30:12.772 "nvme_admin": true, 00:30:12.772 "nvme_io": true, 00:30:12.772 "nvme_io_md": false, 00:30:12.772 "write_zeroes": true, 00:30:12.772 "zcopy": false, 00:30:12.772 "get_zone_info": false, 00:30:12.772 "zone_management": false, 00:30:12.772 "zone_append": false, 00:30:12.772 "compare": true, 00:30:12.772 "compare_and_write": false, 00:30:12.772 "abort": true, 00:30:12.772 "seek_hole": false, 00:30:12.772 "seek_data": false, 00:30:12.772 "copy": true, 00:30:12.772 "nvme_iov_md": false 00:30:12.772 }, 00:30:12.772 "driver_specific": { 00:30:12.772 "nvme": [ 00:30:12.772 { 00:30:12.772 "pci_address": "0000:00:11.0", 00:30:12.772 "trid": { 00:30:12.772 "trtype": "PCIe", 00:30:12.772 "traddr": "0000:00:11.0" 00:30:12.772 }, 00:30:12.772 "ctrlr_data": { 00:30:12.772 "cntlid": 0, 00:30:12.772 "vendor_id": "0x1b36", 00:30:12.772 "model_number": "QEMU NVMe Ctrl", 00:30:12.772 "serial_number": "12341", 00:30:12.772 "firmware_revision": "8.0.0", 00:30:12.772 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:12.772 "oacs": { 00:30:12.772 "security": 0, 00:30:12.772 "format": 1, 00:30:12.772 "firmware": 0, 00:30:12.772 "ns_manage": 1 00:30:12.772 }, 00:30:12.772 "multi_ctrlr": false, 00:30:12.772 "ana_reporting": false 00:30:12.772 }, 00:30:12.772 "vs": { 00:30:12.772 "nvme_version": "1.4" 00:30:12.772 }, 00:30:12.772 "ns_data": { 00:30:12.772 "id": 1, 00:30:12.772 "can_share": false 00:30:12.772 } 00:30:12.772 } 00:30:12.772 ], 00:30:12.772 "mp_policy": "active_passive" 00:30:12.772 } 00:30:12.772 } 00:30:12.772 ]' 00:30:12.772 03:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:12.772 03:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:12.772 03:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:12.772 03:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:12.772 03:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:12.772 03:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:30:12.772 03:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:30:12.772 03:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:30:12.772 03:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:30:12.772 03:14:43 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:12.772 03:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:13.033 03:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=02b5c5cc-bfca-4040-8f62-8f72ab0143da 00:30:13.033 03:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:30:13.033 03:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 02b5c5cc-bfca-4040-8f62-8f72ab0143da 00:30:13.374 03:14:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:30:13.664 03:14:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=99de7fa5-6cb9-443c-aa83-1dda50c46886 00:30:13.664 03:14:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 99de7fa5-6cb9-443c-aa83-1dda50c46886 00:30:13.664 03:14:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=8a858836-3276-4a54-877f-ec578deadf70 00:30:13.664 03:14:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 8a858836-3276-4a54-877f-ec578deadf70 ]] 00:30:13.664 03:14:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 8a858836-3276-4a54-877f-ec578deadf70 5120 00:30:13.664 03:14:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:30:13.664 03:14:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:13.664 03:14:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=8a858836-3276-4a54-877f-ec578deadf70 00:30:13.664 03:14:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:30:13.664 03:14:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 8a858836-3276-4a54-877f-ec578deadf70 00:30:13.664 03:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=8a858836-3276-4a54-877f-ec578deadf70 00:30:13.664 03:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:13.664 03:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:13.665 03:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:30:13.665 03:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8a858836-3276-4a54-877f-ec578deadf70 00:30:13.930 03:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:13.930 { 00:30:13.930 "name": "8a858836-3276-4a54-877f-ec578deadf70", 00:30:13.930 "aliases": [ 00:30:13.930 "lvs/basen1p0" 00:30:13.930 ], 00:30:13.930 "product_name": "Logical Volume", 00:30:13.930 "block_size": 4096, 00:30:13.930 "num_blocks": 5242880, 00:30:13.930 "uuid": "8a858836-3276-4a54-877f-ec578deadf70", 00:30:13.930 "assigned_rate_limits": { 00:30:13.930 "rw_ios_per_sec": 0, 00:30:13.930 "rw_mbytes_per_sec": 0, 00:30:13.930 "r_mbytes_per_sec": 0, 00:30:13.930 "w_mbytes_per_sec": 0 00:30:13.930 }, 00:30:13.930 "claimed": false, 00:30:13.930 "zoned": false, 00:30:13.930 "supported_io_types": { 00:30:13.930 "read": true, 00:30:13.930 "write": true, 00:30:13.930 "unmap": true, 00:30:13.930 "flush": false, 00:30:13.930 "reset": true, 00:30:13.930 "nvme_admin": false, 00:30:13.930 "nvme_io": false, 00:30:13.930 "nvme_io_md": false, 00:30:13.930 "write_zeroes": 
true, 00:30:13.930 "zcopy": false, 00:30:13.930 "get_zone_info": false, 00:30:13.930 "zone_management": false, 00:30:13.930 "zone_append": false, 00:30:13.930 "compare": false, 00:30:13.930 "compare_and_write": false, 00:30:13.930 "abort": false, 00:30:13.930 "seek_hole": true, 00:30:13.930 "seek_data": true, 00:30:13.930 "copy": false, 00:30:13.930 "nvme_iov_md": false 00:30:13.930 }, 00:30:13.930 "driver_specific": { 00:30:13.930 "lvol": { 00:30:13.930 "lvol_store_uuid": "99de7fa5-6cb9-443c-aa83-1dda50c46886", 00:30:13.930 "base_bdev": "basen1", 00:30:13.930 "thin_provision": true, 00:30:13.930 "num_allocated_clusters": 0, 00:30:13.930 "snapshot": false, 00:30:13.930 "clone": false, 00:30:13.930 "esnap_clone": false 00:30:13.930 } 00:30:13.930 } 00:30:13.930 } 00:30:13.930 ]' 00:30:13.930 03:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:13.930 03:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:13.930 03:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:13.930 03:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:30:13.930 03:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:30:13.930 03:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:30:13.930 03:14:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:30:13.930 03:14:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:30:13.930 03:14:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:30:14.190 03:14:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:30:14.190 03:14:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:30:14.190 03:14:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:30:14.458 03:14:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:30:14.458 03:14:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:30:14.458 03:14:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 8a858836-3276-4a54-877f-ec578deadf70 -c cachen1p0 --l2p_dram_limit 2 00:30:14.722 [2024-12-05 03:14:45.362215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.722 [2024-12-05 03:14:45.362254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:14.722 [2024-12-05 03:14:45.362266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:14.722 [2024-12-05 03:14:45.362273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.722 [2024-12-05 03:14:45.362316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.722 [2024-12-05 03:14:45.362324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:14.722 [2024-12-05 03:14:45.362332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:30:14.722 [2024-12-05 03:14:45.362338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.722 [2024-12-05 03:14:45.362354] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:14.722 [2024-12-05 
03:14:45.362866] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:14.722 [2024-12-05 03:14:45.362888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.722 [2024-12-05 03:14:45.362894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:14.722 [2024-12-05 03:14:45.362904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.536 ms 00:30:14.722 [2024-12-05 03:14:45.362910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.722 [2024-12-05 03:14:45.362957] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 622a0b7e-690c-48b4-a5e7-caff1c5488be 00:30:14.723 [2024-12-05 03:14:45.363882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.723 [2024-12-05 03:14:45.363911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:30:14.723 [2024-12-05 03:14:45.363918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:30:14.723 [2024-12-05 03:14:45.363926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.723 [2024-12-05 03:14:45.368554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.723 [2024-12-05 03:14:45.368584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:14.723 [2024-12-05 03:14:45.368591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.600 ms 00:30:14.723 [2024-12-05 03:14:45.368598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.723 [2024-12-05 03:14:45.368630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.723 [2024-12-05 03:14:45.368638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:14.723 [2024-12-05 03:14:45.368644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:30:14.723 [2024-12-05 03:14:45.368653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.723 [2024-12-05 03:14:45.368693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.723 [2024-12-05 03:14:45.368703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:14.723 [2024-12-05 03:14:45.368712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:30:14.723 [2024-12-05 03:14:45.368719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.723 [2024-12-05 03:14:45.368735] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:14.723 [2024-12-05 03:14:45.371600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.723 [2024-12-05 03:14:45.371625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:14.723 [2024-12-05 03:14:45.371635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.868 ms 00:30:14.723 [2024-12-05 03:14:45.371641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.723 [2024-12-05 03:14:45.371662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.723 [2024-12-05 03:14:45.371669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:14.723 [2024-12-05 03:14:45.371676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:14.723 [2024-12-05 03:14:45.371682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:14.723 [2024-12-05 03:14:45.371696] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:30:14.723 [2024-12-05 03:14:45.371804] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:14.723 [2024-12-05 03:14:45.371815] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:14.723 [2024-12-05 03:14:45.371824] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:14.723 [2024-12-05 03:14:45.371832] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:14.723 [2024-12-05 03:14:45.371839] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:14.723 [2024-12-05 03:14:45.371846] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:14.723 [2024-12-05 03:14:45.371852] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:14.723 [2024-12-05 03:14:45.371861] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:14.723 [2024-12-05 03:14:45.371866] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:14.723 [2024-12-05 03:14:45.371873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.723 [2024-12-05 03:14:45.371879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:14.723 [2024-12-05 03:14:45.371886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.179 ms 00:30:14.723 [2024-12-05 03:14:45.371892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.723 [2024-12-05 03:14:45.371958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.723 [2024-12-05 03:14:45.371968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:14.723 [2024-12-05 03:14:45.371975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:30:14.723 [2024-12-05 03:14:45.371980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.723 [2024-12-05 03:14:45.372058] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:14.723 [2024-12-05 03:14:45.372065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:14.723 [2024-12-05 03:14:45.372083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:14.723 [2024-12-05 03:14:45.372089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:14.723 [2024-12-05 03:14:45.372096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:14.723 [2024-12-05 03:14:45.372101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:14.723 [2024-12-05 03:14:45.372108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:14.723 [2024-12-05 03:14:45.372113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:14.723 [2024-12-05 03:14:45.372120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:14.723 [2024-12-05 03:14:45.372125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:14.723 [2024-12-05 03:14:45.372132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:14.723 [2024-12-05 03:14:45.372138] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:30:14.723 [2024-12-05 03:14:45.372144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:14.723 [2024-12-05 03:14:45.372149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:14.723 [2024-12-05 03:14:45.372156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:14.723 [2024-12-05 03:14:45.372160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:14.723 [2024-12-05 03:14:45.372168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:14.723 [2024-12-05 03:14:45.372173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:14.723 [2024-12-05 03:14:45.372179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:14.723 [2024-12-05 03:14:45.372184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:14.723 [2024-12-05 03:14:45.372190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:14.723 [2024-12-05 03:14:45.372195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:14.723 [2024-12-05 03:14:45.372202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:14.723 [2024-12-05 03:14:45.372207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:14.723 [2024-12-05 03:14:45.372213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:14.723 [2024-12-05 03:14:45.372218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:14.723 [2024-12-05 03:14:45.372224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:14.723 [2024-12-05 03:14:45.372229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:14.723 [2024-12-05 03:14:45.372236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:14.723 [2024-12-05 03:14:45.372240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:14.723 [2024-12-05 03:14:45.372247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:14.723 [2024-12-05 03:14:45.372252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:14.723 [2024-12-05 03:14:45.372259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:14.723 [2024-12-05 03:14:45.372264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:14.723 [2024-12-05 03:14:45.372271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:14.723 [2024-12-05 03:14:45.372277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:14.723 [2024-12-05 03:14:45.372284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:14.723 [2024-12-05 03:14:45.372289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:14.723 [2024-12-05 03:14:45.372296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:14.723 [2024-12-05 03:14:45.372301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:14.723 [2024-12-05 03:14:45.372307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:14.723 [2024-12-05 03:14:45.372312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:14.723 [2024-12-05 03:14:45.372318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:14.723 [2024-12-05 03:14:45.372323] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:30:14.723 [2024-12-05 03:14:45.372330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:14.723 [2024-12-05 03:14:45.372335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:14.723 [2024-12-05 03:14:45.372341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:14.723 [2024-12-05 03:14:45.372347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:14.723 [2024-12-05 03:14:45.372354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:14.723 [2024-12-05 03:14:45.372360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:14.723 [2024-12-05 03:14:45.372366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:14.723 [2024-12-05 03:14:45.372371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:14.723 [2024-12-05 03:14:45.372377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:14.723 [2024-12-05 03:14:45.372384] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:14.723 [2024-12-05 03:14:45.372393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:14.723 [2024-12-05 03:14:45.372400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:14.723 [2024-12-05 03:14:45.372407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:14.723 [2024-12-05 03:14:45.372412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:14.723 [2024-12-05 03:14:45.372418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:14.723 [2024-12-05 03:14:45.372423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:14.724 [2024-12-05 03:14:45.372430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:14.724 [2024-12-05 03:14:45.372435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:14.724 [2024-12-05 03:14:45.372443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:14.724 [2024-12-05 03:14:45.372448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:14.724 [2024-12-05 03:14:45.372456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:14.724 [2024-12-05 03:14:45.372461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:14.724 [2024-12-05 03:14:45.372468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:14.724 [2024-12-05 03:14:45.372474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:14.724 [2024-12-05 03:14:45.372482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:14.724 [2024-12-05 03:14:45.372487] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:14.724 [2024-12-05 03:14:45.372494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:14.724 [2024-12-05 03:14:45.372500] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:14.724 [2024-12-05 03:14:45.372507] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:14.724 [2024-12-05 03:14:45.372513] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:14.724 [2024-12-05 03:14:45.372519] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:14.724 [2024-12-05 03:14:45.372525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.724 [2024-12-05 03:14:45.372531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:14.724 [2024-12-05 03:14:45.372537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.520 ms 00:30:14.724 [2024-12-05 03:14:45.372544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.724 [2024-12-05 03:14:45.372571] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
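Condensed from the trace above, the device stack underneath the ftl bdev is built with the following RPC sequence (a sketch of this particular run; the lvol store and lvol UUIDs shown are specific to this execution):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0       # -> basen1 (QEMU NVMe, 1310720 x 4 KiB blocks)
  $rpc bdev_lvol_delete_lvstore -u 02b5c5cc-bfca-4040-8f62-8f72ab0143da  # clear_lvols: drop the stale store found on it
  $rpc bdev_lvol_create_lvstore basen1 lvs                               # -> 99de7fa5-6cb9-443c-aa83-1dda50c46886
  $rpc bdev_lvol_create basen1p0 20480 -t -u 99de7fa5-6cb9-443c-aa83-1dda50c46886   # thin 20480 MiB base volume
  $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0      # -> cachen1
  $rpc bdev_split_create cachen1 -s 5120 1                               # -> cachen1p0, 5120 MiB NV cache
  $rpc -t 60 bdev_ftl_create -b ftl -d 8a858836-3276-4a54-877f-ec578deadf70 -c cachen1p0 --l2p_dram_limit 2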
00:30:14.724 [2024-12-05 03:14:45.372581] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:18.023 [2024-12-05 03:14:48.659379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.023 [2024-12-05 03:14:48.659463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:18.023 [2024-12-05 03:14:48.659482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3286.792 ms 00:30:18.023 [2024-12-05 03:14:48.659494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.023 [2024-12-05 03:14:48.690463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.023 [2024-12-05 03:14:48.690537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:18.023 [2024-12-05 03:14:48.690552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.723 ms 00:30:18.023 [2024-12-05 03:14:48.690563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.023 [2024-12-05 03:14:48.690651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.023 [2024-12-05 03:14:48.690665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:18.023 [2024-12-05 03:14:48.690674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:18.023 [2024-12-05 03:14:48.690692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.023 [2024-12-05 03:14:48.725722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.023 [2024-12-05 03:14:48.725777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:18.023 [2024-12-05 03:14:48.725790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.995 ms 00:30:18.023 [2024-12-05 03:14:48.725802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.023 [2024-12-05 03:14:48.725837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.023 [2024-12-05 03:14:48.725851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:18.023 [2024-12-05 03:14:48.725860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:18.023 [2024-12-05 03:14:48.725870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.023 [2024-12-05 03:14:48.726497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.023 [2024-12-05 03:14:48.726538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:18.023 [2024-12-05 03:14:48.726557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.558 ms 00:30:18.023 [2024-12-05 03:14:48.726569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.023 [2024-12-05 03:14:48.726614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.023 [2024-12-05 03:14:48.726626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:18.023 [2024-12-05 03:14:48.726637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:30:18.023 [2024-12-05 03:14:48.726650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.023 [2024-12-05 03:14:48.743936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.023 [2024-12-05 03:14:48.743991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:18.023 [2024-12-05 03:14:48.744003] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.266 ms 00:30:18.023 [2024-12-05 03:14:48.744013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.023 [2024-12-05 03:14:48.768251] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:18.023 [2024-12-05 03:14:48.769753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.023 [2024-12-05 03:14:48.769801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:18.023 [2024-12-05 03:14:48.769817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.637 ms 00:30:18.023 [2024-12-05 03:14:48.769827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.023 [2024-12-05 03:14:48.799382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.023 [2024-12-05 03:14:48.799447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:30:18.023 [2024-12-05 03:14:48.799464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.507 ms 00:30:18.023 [2024-12-05 03:14:48.799473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.023 [2024-12-05 03:14:48.799583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.023 [2024-12-05 03:14:48.799598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:18.023 [2024-12-05 03:14:48.799613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:30:18.023 [2024-12-05 03:14:48.799623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.023 [2024-12-05 03:14:48.825111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.023 [2024-12-05 03:14:48.825157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:30:18.023 [2024-12-05 03:14:48.825174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.429 ms 00:30:18.023 [2024-12-05 03:14:48.825182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.023 [2024-12-05 03:14:48.849987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.023 [2024-12-05 03:14:48.850036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:30:18.023 [2024-12-05 03:14:48.850051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.744 ms 00:30:18.023 [2024-12-05 03:14:48.850058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.023 [2024-12-05 03:14:48.850665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.023 [2024-12-05 03:14:48.850696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:18.023 [2024-12-05 03:14:48.850709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.547 ms 00:30:18.023 [2024-12-05 03:14:48.850720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.285 [2024-12-05 03:14:48.938540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.285 [2024-12-05 03:14:48.938590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:30:18.285 [2024-12-05 03:14:48.938610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 87.774 ms 00:30:18.285 [2024-12-05 03:14:48.938620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.285 [2024-12-05 03:14:48.966044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
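A note on reading the layout dump a few entries above: the blk_offs/blk_sz values in the superblock metadata listing are hexadecimal counts of 4 KiB blocks, and they reconcile with the MiB figures printed alongside. Two spot checks as shell arithmetic (sketch):

  echo $(( 0x480000 * 4 ))   # base-dev data_btm: 4718592 blocks * 4 KiB = 18874368 KiB = 18432 MiB
  echo $(( 0xe80 * 4 ))      # nvc l2p region: 3712 blocks * 4 KiB = 14848 KiB = 14.5 MiB

The reported 3774873 L2P entries at 4 bytes each come to roughly 14.4 MiB, which fits that 14.5 MiB l2p region.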
00:30:18.285 [2024-12-05 03:14:48.966103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:30:18.285 [2024-12-05 03:14:48.966120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.329 ms 00:30:18.285 [2024-12-05 03:14:48.966128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.285 [2024-12-05 03:14:48.991751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.285 [2024-12-05 03:14:48.991798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:30:18.285 [2024-12-05 03:14:48.991814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.569 ms 00:30:18.285 [2024-12-05 03:14:48.991822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.285 [2024-12-05 03:14:49.017654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.285 [2024-12-05 03:14:49.017700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:18.285 [2024-12-05 03:14:49.017714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.781 ms 00:30:18.285 [2024-12-05 03:14:49.017722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.285 [2024-12-05 03:14:49.017780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.285 [2024-12-05 03:14:49.017790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:18.285 [2024-12-05 03:14:49.017805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:18.285 [2024-12-05 03:14:49.017813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.285 [2024-12-05 03:14:49.017901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.285 [2024-12-05 03:14:49.017915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:18.285 [2024-12-05 03:14:49.017925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:30:18.285 [2024-12-05 03:14:49.017933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.285 [2024-12-05 03:14:49.019223] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3656.479 ms, result 0 00:30:18.285 { 00:30:18.285 "name": "ftl", 00:30:18.285 "uuid": "622a0b7e-690c-48b4-a5e7-caff1c5488be" 00:30:18.285 } 00:30:18.285 03:14:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:30:18.547 [2024-12-05 03:14:49.238237] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:18.547 03:14:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:30:18.808 03:14:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:30:19.070 [2024-12-05 03:14:49.670622] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:19.070 03:14:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:30:19.070 [2024-12-05 03:14:49.887722] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:19.070 03:14:49 
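With FTL startup finished (3656.479 ms, result 0), the bdev is exported over NVMe/TCP so a separate initiator-side process can write to it. The export boils down to the four RPCs traced above (sketch):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport --trtype TCP
  $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1          # allow any host, max 1 namespace
  $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl              # expose the ftl bdev as the namespace
  $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1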
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:30:19.644 03:14:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:30:19.644 03:14:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:30:19.644 03:14:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:30:19.644 03:14:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:30:19.644 03:14:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:30:19.644 03:14:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:30:19.644 03:14:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:30:19.644 03:14:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:30:19.644 03:14:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:30:19.644 03:14:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:19.644 Fill FTL, iteration 1 00:30:19.644 03:14:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:30:19.644 03:14:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:19.644 03:14:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:19.644 03:14:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:19.644 03:14:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:19.644 03:14:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:30:19.644 03:14:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83235 00:30:19.644 03:14:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:30:19.644 03:14:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83235 /var/tmp/spdk.tgt.sock 00:30:19.644 03:14:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83235 ']' 00:30:19.644 03:14:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:30:19.644 03:14:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:30:19.644 03:14:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:19.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:30:19.644 03:14:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:30:19.644 03:14:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:19.644 03:14:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:19.644 [2024-12-05 03:14:50.329909] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:30:19.644 [2024-12-05 03:14:50.330019] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83235 ] 00:30:19.905 [2024-12-05 03:14:50.488641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.905 [2024-12-05 03:14:50.581015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:20.477 03:14:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:20.477 03:14:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:20.477 03:14:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:30:20.738 ftln1 00:30:20.738 03:14:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:30:20.739 03:14:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:30:21.000 03:14:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:30:21.000 03:14:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83235 00:30:21.000 03:14:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83235 ']' 00:30:21.000 03:14:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83235 00:30:21.000 03:14:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:21.000 03:14:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:21.000 03:14:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83235 00:30:21.000 03:14:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:21.000 03:14:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:21.000 killing process with pid 83235 00:30:21.000 03:14:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83235' 00:30:21.000 03:14:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83235 00:30:21.000 03:14:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83235 00:30:22.386 03:14:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:30:22.386 03:14:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:22.647 [2024-12-05 03:14:53.239185] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
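The initiator side runs its own spdk_tgt on core 1 with a separate RPC socket, attaches the exported subsystem over TCP (which surfaces the namespace as ftln1), captures that bdev configuration into ini.json for spdk_dd, and then kills the helper target again. A sketch of the sequence as used here (waitforlisten on the new socket omitted for brevity):

  ini_rpc=/var/tmp/spdk.tgt.sock
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --cpumask='[1]' --rpc-socket=$ini_rpc &
  ini_pid=$!
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $ini_rpc"
  $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0   # -> ftln1
  { echo '{"subsystems": ['; $rpc save_subsystem_config -n bdev; echo ']}'; } \
      > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
  kill $ini_pid   # helper target only needed long enough to capture ini.json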
00:30:22.647 [2024-12-05 03:14:53.239301] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83282 ] 00:30:22.647 [2024-12-05 03:14:53.397062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:22.908 [2024-12-05 03:14:53.500905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:24.292  [2024-12-05T03:14:56.074Z] Copying: 191/1024 [MB] (191 MBps) [2024-12-05T03:14:57.007Z] Copying: 379/1024 [MB] (188 MBps) [2024-12-05T03:14:57.944Z] Copying: 623/1024 [MB] (244 MBps) [2024-12-05T03:14:58.885Z] Copying: 850/1024 [MB] (227 MBps) [2024-12-05T03:14:59.825Z] Copying: 1024/1024 [MB] (average 206 MBps) 00:30:28.981 00:30:28.981 03:14:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:30:28.981 Calculate MD5 checksum, iteration 1 00:30:28.981 03:14:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:30:28.981 03:14:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:28.981 03:14:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:28.981 03:14:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:28.981 03:14:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:28.981 03:14:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:28.981 03:14:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:28.981 [2024-12-05 03:14:59.562131] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:30:28.981 [2024-12-05 03:14:59.562253] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83346 ] 00:30:28.981 [2024-12-05 03:14:59.718720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.981 [2024-12-05 03:14:59.793216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:30.367  [2024-12-05T03:15:01.781Z] Copying: 743/1024 [MB] (743 MBps) [2024-12-05T03:15:02.040Z] Copying: 1024/1024 [MB] (average 722 MBps) 00:30:31.196 00:30:31.196 03:15:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:30:31.196 03:15:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:33.735 03:15:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:33.735 Fill FTL, iteration 2 00:30:33.735 03:15:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=18f709a886cb6244ea328319e6f82c3b 00:30:33.735 03:15:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:33.735 03:15:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:33.735 03:15:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:30:33.735 03:15:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:33.735 03:15:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:33.735 03:15:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:33.735 03:15:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:33.735 03:15:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:33.735 03:15:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:33.735 [2024-12-05 03:15:04.147111] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
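Each iteration writes 1024 MiB of random data at the current seek offset through the exported ftln1 bdev, reads the same region back, and records its MD5 (iteration 1 above: 18f709a886cb6244ea328319e6f82c3b) so the data can be re-verified after the upgrade/shutdown cycle. The bookkeeping, simplified into one loop (a sketch; the real tcp_dd helper rebuilds ini.json around every spdk_dd call):

  dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  ini=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
  file=/home/vagrant/spdk_repo/spdk/test/ftl/file
  seek=0; skip=0; sums=()
  for ((i = 0; i < 2; i++)); do
    $dd --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=$ini \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
    ((seek += 1024))
    $dd --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=$ini \
        --ib=ftln1 --of=$file --bs=1048576 --count=1024 --qd=2 --skip=$skip
    ((skip += 1024))
    sums[i]=$(md5sum $file | cut -f1 -d' ')   # e.g. 18f709a886cb6244ea328319e6f82c3b for iteration 1
  done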
00:30:33.735 [2024-12-05 03:15:04.147223] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83398 ] 00:30:33.735 [2024-12-05 03:15:04.301600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.735 [2024-12-05 03:15:04.377536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:35.119  [2024-12-05T03:15:06.905Z] Copying: 249/1024 [MB] (249 MBps) [2024-12-05T03:15:07.846Z] Copying: 489/1024 [MB] (240 MBps) [2024-12-05T03:15:08.790Z] Copying: 741/1024 [MB] (252 MBps) [2024-12-05T03:15:09.051Z] Copying: 994/1024 [MB] (253 MBps) [2024-12-05T03:15:09.624Z] Copying: 1024/1024 [MB] (average 247 MBps) 00:30:38.780 00:30:38.780 Calculate MD5 checksum, iteration 2 00:30:38.780 03:15:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:30:38.780 03:15:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:30:38.780 03:15:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:38.780 03:15:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:38.780 03:15:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:38.780 03:15:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:38.780 03:15:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:38.780 03:15:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:38.780 [2024-12-05 03:15:09.456392] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:30:38.780 [2024-12-05 03:15:09.456529] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83457 ] 00:30:38.780 [2024-12-05 03:15:09.615484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.041 [2024-12-05 03:15:09.707403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:40.427  [2024-12-05T03:15:11.856Z] Copying: 679/1024 [MB] (679 MBps) [2024-12-05T03:15:12.799Z] Copying: 1024/1024 [MB] (average 675 MBps) 00:30:41.956 00:30:41.956 03:15:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:30:41.956 03:15:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:43.869 03:15:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:43.869 03:15:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=7293132af09cb3953beb5f51d3856cb0 00:30:43.869 03:15:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:43.869 03:15:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:43.869 03:15:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:44.130 [2024-12-05 03:15:14.880288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.130 [2024-12-05 03:15:14.880328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:44.130 [2024-12-05 03:15:14.880339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:44.130 [2024-12-05 03:15:14.880345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.130 [2024-12-05 03:15:14.880363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.130 [2024-12-05 03:15:14.880373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:44.130 [2024-12-05 03:15:14.880379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:44.130 [2024-12-05 03:15:14.880386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.130 [2024-12-05 03:15:14.880401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.130 [2024-12-05 03:15:14.880407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:44.130 [2024-12-05 03:15:14.880414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:44.130 [2024-12-05 03:15:14.880420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.130 [2024-12-05 03:15:14.880469] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.171 ms, result 0 00:30:44.130 true 00:30:44.130 03:15:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:44.391 { 00:30:44.391 "name": "ftl", 00:30:44.391 "properties": [ 00:30:44.391 { 00:30:44.391 "name": "superblock_version", 00:30:44.391 "value": 5, 00:30:44.391 "read-only": true 00:30:44.391 }, 00:30:44.391 { 00:30:44.391 "name": "base_device", 00:30:44.391 "bands": [ 00:30:44.391 { 00:30:44.391 "id": 0, 00:30:44.391 "state": "FREE", 00:30:44.391 "validity": 0.0 
00:30:44.391 }, 00:30:44.391 { 00:30:44.391 "id": 1, 00:30:44.391 "state": "FREE", 00:30:44.391 "validity": 0.0 00:30:44.391 }, 00:30:44.391 { 00:30:44.391 "id": 2, 00:30:44.391 "state": "FREE", 00:30:44.391 "validity": 0.0 00:30:44.391 }, 00:30:44.391 { 00:30:44.391 "id": 3, 00:30:44.391 "state": "FREE", 00:30:44.391 "validity": 0.0 00:30:44.391 }, 00:30:44.391 { 00:30:44.391 "id": 4, 00:30:44.391 "state": "FREE", 00:30:44.391 "validity": 0.0 00:30:44.391 }, 00:30:44.391 { 00:30:44.391 "id": 5, 00:30:44.391 "state": "FREE", 00:30:44.391 "validity": 0.0 00:30:44.391 }, 00:30:44.391 { 00:30:44.391 "id": 6, 00:30:44.391 "state": "FREE", 00:30:44.391 "validity": 0.0 00:30:44.391 }, 00:30:44.391 { 00:30:44.391 "id": 7, 00:30:44.391 "state": "FREE", 00:30:44.391 "validity": 0.0 00:30:44.391 }, 00:30:44.391 { 00:30:44.391 "id": 8, 00:30:44.391 "state": "FREE", 00:30:44.391 "validity": 0.0 00:30:44.391 }, 00:30:44.391 { 00:30:44.391 "id": 9, 00:30:44.391 "state": "FREE", 00:30:44.391 "validity": 0.0 00:30:44.391 }, 00:30:44.391 { 00:30:44.391 "id": 10, 00:30:44.391 "state": "FREE", 00:30:44.391 "validity": 0.0 00:30:44.391 }, 00:30:44.391 { 00:30:44.391 "id": 11, 00:30:44.391 "state": "FREE", 00:30:44.391 "validity": 0.0 00:30:44.391 }, 00:30:44.391 { 00:30:44.391 "id": 12, 00:30:44.391 "state": "FREE", 00:30:44.391 "validity": 0.0 00:30:44.391 }, 00:30:44.391 { 00:30:44.391 "id": 13, 00:30:44.391 "state": "FREE", 00:30:44.391 "validity": 0.0 00:30:44.391 }, 00:30:44.391 { 00:30:44.391 "id": 14, 00:30:44.391 "state": "FREE", 00:30:44.391 "validity": 0.0 00:30:44.391 }, 00:30:44.391 { 00:30:44.391 "id": 15, 00:30:44.392 "state": "FREE", 00:30:44.392 "validity": 0.0 00:30:44.392 }, 00:30:44.392 { 00:30:44.392 "id": 16, 00:30:44.392 "state": "FREE", 00:30:44.392 "validity": 0.0 00:30:44.392 }, 00:30:44.392 { 00:30:44.392 "id": 17, 00:30:44.392 "state": "FREE", 00:30:44.392 "validity": 0.0 00:30:44.392 } 00:30:44.392 ], 00:30:44.392 "read-only": true 00:30:44.392 }, 00:30:44.392 { 00:30:44.392 "name": "cache_device", 00:30:44.392 "type": "bdev", 00:30:44.392 "chunks": [ 00:30:44.392 { 00:30:44.392 "id": 0, 00:30:44.392 "state": "INACTIVE", 00:30:44.392 "utilization": 0.0 00:30:44.392 }, 00:30:44.392 { 00:30:44.392 "id": 1, 00:30:44.392 "state": "CLOSED", 00:30:44.392 "utilization": 1.0 00:30:44.392 }, 00:30:44.392 { 00:30:44.392 "id": 2, 00:30:44.392 "state": "CLOSED", 00:30:44.392 "utilization": 1.0 00:30:44.392 }, 00:30:44.392 { 00:30:44.392 "id": 3, 00:30:44.392 "state": "OPEN", 00:30:44.392 "utilization": 0.001953125 00:30:44.392 }, 00:30:44.392 { 00:30:44.392 "id": 4, 00:30:44.392 "state": "OPEN", 00:30:44.392 "utilization": 0.0 00:30:44.392 } 00:30:44.392 ], 00:30:44.392 "read-only": true 00:30:44.392 }, 00:30:44.392 { 00:30:44.392 "name": "verbose_mode", 00:30:44.392 "value": true, 00:30:44.392 "unit": "", 00:30:44.392 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:44.392 }, 00:30:44.392 { 00:30:44.392 "name": "prep_upgrade_on_shutdown", 00:30:44.392 "value": false, 00:30:44.392 "unit": "", 00:30:44.392 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:44.392 } 00:30:44.392 ] 00:30:44.392 } 00:30:44.392 03:15:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:30:44.652 [2024-12-05 03:15:15.280608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:30:44.652 [2024-12-05 03:15:15.280645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:44.652 [2024-12-05 03:15:15.280655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:44.652 [2024-12-05 03:15:15.280662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.652 [2024-12-05 03:15:15.280679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.652 [2024-12-05 03:15:15.280685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:44.652 [2024-12-05 03:15:15.280691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:44.652 [2024-12-05 03:15:15.280697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.652 [2024-12-05 03:15:15.280711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.652 [2024-12-05 03:15:15.280717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:44.652 [2024-12-05 03:15:15.280723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:44.653 [2024-12-05 03:15:15.280728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.653 [2024-12-05 03:15:15.280771] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.154 ms, result 0 00:30:44.653 true 00:30:44.653 03:15:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:30:44.653 03:15:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:44.653 03:15:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:44.911 03:15:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:30:44.911 03:15:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:30:44.911 03:15:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:44.911 [2024-12-05 03:15:15.688923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.911 [2024-12-05 03:15:15.688956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:44.911 [2024-12-05 03:15:15.688964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:44.911 [2024-12-05 03:15:15.688970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.911 [2024-12-05 03:15:15.688988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.911 [2024-12-05 03:15:15.688994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:44.911 [2024-12-05 03:15:15.688999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:44.911 [2024-12-05 03:15:15.689005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.911 [2024-12-05 03:15:15.689019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.911 [2024-12-05 03:15:15.689025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:44.911 [2024-12-05 03:15:15.689030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:44.911 [2024-12-05 03:15:15.689035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:44.911 [2024-12-05 03:15:15.689087] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.147 ms, result 0 00:30:44.911 true 00:30:44.911 03:15:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:45.171 { 00:30:45.171 "name": "ftl", 00:30:45.171 "properties": [ 00:30:45.171 { 00:30:45.171 "name": "superblock_version", 00:30:45.171 "value": 5, 00:30:45.171 "read-only": true 00:30:45.171 }, 00:30:45.171 { 00:30:45.171 "name": "base_device", 00:30:45.171 "bands": [ 00:30:45.171 { 00:30:45.171 "id": 0, 00:30:45.171 "state": "FREE", 00:30:45.171 "validity": 0.0 00:30:45.171 }, 00:30:45.171 { 00:30:45.171 "id": 1, 00:30:45.171 "state": "FREE", 00:30:45.171 "validity": 0.0 00:30:45.171 }, 00:30:45.171 { 00:30:45.171 "id": 2, 00:30:45.171 "state": "FREE", 00:30:45.171 "validity": 0.0 00:30:45.171 }, 00:30:45.171 { 00:30:45.171 "id": 3, 00:30:45.171 "state": "FREE", 00:30:45.171 "validity": 0.0 00:30:45.171 }, 00:30:45.171 { 00:30:45.171 "id": 4, 00:30:45.171 "state": "FREE", 00:30:45.171 "validity": 0.0 00:30:45.171 }, 00:30:45.171 { 00:30:45.171 "id": 5, 00:30:45.171 "state": "FREE", 00:30:45.171 "validity": 0.0 00:30:45.171 }, 00:30:45.171 { 00:30:45.171 "id": 6, 00:30:45.171 "state": "FREE", 00:30:45.171 "validity": 0.0 00:30:45.171 }, 00:30:45.171 { 00:30:45.171 "id": 7, 00:30:45.171 "state": "FREE", 00:30:45.171 "validity": 0.0 00:30:45.171 }, 00:30:45.171 { 00:30:45.171 "id": 8, 00:30:45.171 "state": "FREE", 00:30:45.171 "validity": 0.0 00:30:45.171 }, 00:30:45.171 { 00:30:45.171 "id": 9, 00:30:45.171 "state": "FREE", 00:30:45.171 "validity": 0.0 00:30:45.171 }, 00:30:45.171 { 00:30:45.171 "id": 10, 00:30:45.171 "state": "FREE", 00:30:45.171 "validity": 0.0 00:30:45.171 }, 00:30:45.171 { 00:30:45.171 "id": 11, 00:30:45.171 "state": "FREE", 00:30:45.171 "validity": 0.0 00:30:45.171 }, 00:30:45.171 { 00:30:45.171 "id": 12, 00:30:45.171 "state": "FREE", 00:30:45.171 "validity": 0.0 00:30:45.171 }, 00:30:45.171 { 00:30:45.171 "id": 13, 00:30:45.171 "state": "FREE", 00:30:45.171 "validity": 0.0 00:30:45.171 }, 00:30:45.171 { 00:30:45.171 "id": 14, 00:30:45.171 "state": "FREE", 00:30:45.171 "validity": 0.0 00:30:45.171 }, 00:30:45.171 { 00:30:45.171 "id": 15, 00:30:45.171 "state": "FREE", 00:30:45.171 "validity": 0.0 00:30:45.171 }, 00:30:45.171 { 00:30:45.171 "id": 16, 00:30:45.171 "state": "FREE", 00:30:45.171 "validity": 0.0 00:30:45.171 }, 00:30:45.171 { 00:30:45.171 "id": 17, 00:30:45.171 "state": "FREE", 00:30:45.171 "validity": 0.0 00:30:45.171 } 00:30:45.171 ], 00:30:45.171 "read-only": true 00:30:45.171 }, 00:30:45.171 { 00:30:45.171 "name": "cache_device", 00:30:45.171 "type": "bdev", 00:30:45.171 "chunks": [ 00:30:45.171 { 00:30:45.171 "id": 0, 00:30:45.171 "state": "INACTIVE", 00:30:45.171 "utilization": 0.0 00:30:45.171 }, 00:30:45.171 { 00:30:45.171 "id": 1, 00:30:45.171 "state": "CLOSED", 00:30:45.171 "utilization": 1.0 00:30:45.171 }, 00:30:45.171 { 00:30:45.171 "id": 2, 00:30:45.171 "state": "CLOSED", 00:30:45.171 "utilization": 1.0 00:30:45.171 }, 00:30:45.171 { 00:30:45.171 "id": 3, 00:30:45.171 "state": "OPEN", 00:30:45.171 "utilization": 0.001953125 00:30:45.171 }, 00:30:45.171 { 00:30:45.171 "id": 4, 00:30:45.171 "state": "OPEN", 00:30:45.171 "utilization": 0.0 00:30:45.171 } 00:30:45.171 ], 00:30:45.171 "read-only": true 00:30:45.171 }, 00:30:45.171 { 00:30:45.171 "name": "verbose_mode", 
00:30:45.171 "value": true, 00:30:45.171 "unit": "", 00:30:45.171 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:45.171 }, 00:30:45.172 { 00:30:45.172 "name": "prep_upgrade_on_shutdown", 00:30:45.172 "value": true, 00:30:45.172 "unit": "", 00:30:45.172 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:45.172 } 00:30:45.172 ] 00:30:45.172 } 00:30:45.172 03:15:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:30:45.172 03:15:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83117 ]] 00:30:45.172 03:15:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83117 00:30:45.172 03:15:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83117 ']' 00:30:45.172 03:15:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83117 00:30:45.172 03:15:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:45.172 03:15:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:45.172 03:15:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83117 00:30:45.172 killing process with pid 83117 00:30:45.172 03:15:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:45.172 03:15:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:45.172 03:15:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83117' 00:30:45.172 03:15:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83117 00:30:45.172 03:15:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83117 00:30:45.808 [2024-12-05 03:15:16.461453] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:45.808 [2024-12-05 03:15:16.471356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.808 [2024-12-05 03:15:16.471391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:45.808 [2024-12-05 03:15:16.471401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:45.808 [2024-12-05 03:15:16.471407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.808 [2024-12-05 03:15:16.471424] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:45.808 [2024-12-05 03:15:16.473503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.808 [2024-12-05 03:15:16.473527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:45.808 [2024-12-05 03:15:16.473536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.068 ms 00:30:45.808 [2024-12-05 03:15:16.473546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.848 [2024-12-05 03:15:24.837824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.848 [2024-12-05 03:15:24.837870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:55.848 [2024-12-05 03:15:24.837890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8364.229 ms 00:30:55.848 [2024-12-05 03:15:24.837900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.848 [2024-12-05 03:15:24.839094] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:30:55.848 [2024-12-05 03:15:24.839132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:55.848 [2024-12-05 03:15:24.839144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.178 ms 00:30:55.848 [2024-12-05 03:15:24.839154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.848 [2024-12-05 03:15:24.840058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.848 [2024-12-05 03:15:24.840094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:55.848 [2024-12-05 03:15:24.840107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.872 ms 00:30:55.848 [2024-12-05 03:15:24.840123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.849 [2024-12-05 03:15:24.848059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.849 [2024-12-05 03:15:24.848102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:55.849 [2024-12-05 03:15:24.848114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.899 ms 00:30:55.849 [2024-12-05 03:15:24.848123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.849 [2024-12-05 03:15:24.853457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.849 [2024-12-05 03:15:24.853489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:55.849 [2024-12-05 03:15:24.853500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.301 ms 00:30:55.849 [2024-12-05 03:15:24.853509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.849 [2024-12-05 03:15:24.853582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.849 [2024-12-05 03:15:24.853598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:55.849 [2024-12-05 03:15:24.853609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:30:55.849 [2024-12-05 03:15:24.853619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.849 [2024-12-05 03:15:24.860935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.849 [2024-12-05 03:15:24.860964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:55.849 [2024-12-05 03:15:24.860974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.299 ms 00:30:55.849 [2024-12-05 03:15:24.860982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.849 [2024-12-05 03:15:24.868198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.849 [2024-12-05 03:15:24.868226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:55.849 [2024-12-05 03:15:24.868236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.184 ms 00:30:55.849 [2024-12-05 03:15:24.868244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.849 [2024-12-05 03:15:24.875500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.849 [2024-12-05 03:15:24.875527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:55.849 [2024-12-05 03:15:24.875537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.222 ms 00:30:55.849 [2024-12-05 03:15:24.875545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.849 [2024-12-05 03:15:24.882589] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.849 [2024-12-05 03:15:24.882617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:55.849 [2024-12-05 03:15:24.882627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.983 ms 00:30:55.849 [2024-12-05 03:15:24.882634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.849 [2024-12-05 03:15:24.882665] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:55.849 [2024-12-05 03:15:24.882687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:55.849 [2024-12-05 03:15:24.882699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:55.849 [2024-12-05 03:15:24.882710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:55.849 [2024-12-05 03:15:24.882721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:55.849 [2024-12-05 03:15:24.882731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:55.849 [2024-12-05 03:15:24.882741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:55.849 [2024-12-05 03:15:24.882751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:55.849 [2024-12-05 03:15:24.882761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:55.849 [2024-12-05 03:15:24.882771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:55.849 [2024-12-05 03:15:24.882781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:55.849 [2024-12-05 03:15:24.882790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:55.849 [2024-12-05 03:15:24.882800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:55.849 [2024-12-05 03:15:24.882810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:55.849 [2024-12-05 03:15:24.882820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:55.849 [2024-12-05 03:15:24.882829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:55.849 [2024-12-05 03:15:24.882839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:55.849 [2024-12-05 03:15:24.882849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:55.849 [2024-12-05 03:15:24.882859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:55.849 [2024-12-05 03:15:24.882871] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:55.849 [2024-12-05 03:15:24.882881] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 622a0b7e-690c-48b4-a5e7-caff1c5488be 00:30:55.849 [2024-12-05 03:15:24.882891] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:55.849 [2024-12-05 03:15:24.882900] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:30:55.849 [2024-12-05 03:15:24.882910] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:30:55.849 [2024-12-05 03:15:24.882920] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:30:55.849 [2024-12-05 03:15:24.882932] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:55.849 [2024-12-05 03:15:24.882942] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:55.849 [2024-12-05 03:15:24.882954] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:55.849 [2024-12-05 03:15:24.882962] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:55.849 [2024-12-05 03:15:24.882971] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:55.849 [2024-12-05 03:15:24.882980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.849 [2024-12-05 03:15:24.882990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:55.849 [2024-12-05 03:15:24.883000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.315 ms 00:30:55.849 [2024-12-05 03:15:24.883011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.849 [2024-12-05 03:15:24.892941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.849 [2024-12-05 03:15:24.892970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:55.849 [2024-12-05 03:15:24.892985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.903 ms 00:30:55.849 [2024-12-05 03:15:24.892993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.849 [2024-12-05 03:15:24.893339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.849 [2024-12-05 03:15:24.893360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:55.849 [2024-12-05 03:15:24.893371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.326 ms 00:30:55.849 [2024-12-05 03:15:24.893379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.849 [2024-12-05 03:15:24.926139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:55.849 [2024-12-05 03:15:24.926172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:55.849 [2024-12-05 03:15:24.926183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:55.849 [2024-12-05 03:15:24.926191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.849 [2024-12-05 03:15:24.926220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:55.849 [2024-12-05 03:15:24.926230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:55.849 [2024-12-05 03:15:24.926241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:55.849 [2024-12-05 03:15:24.926249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.849 [2024-12-05 03:15:24.926321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:55.849 [2024-12-05 03:15:24.926332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:55.849 [2024-12-05 03:15:24.926347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:55.849 [2024-12-05 03:15:24.926356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.849 [2024-12-05 03:15:24.926373] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:55.849 [2024-12-05 03:15:24.926383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:55.849 [2024-12-05 03:15:24.926393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:55.849 [2024-12-05 03:15:24.926403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.849 [2024-12-05 03:15:24.985157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:55.849 [2024-12-05 03:15:24.985194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:55.849 [2024-12-05 03:15:24.985210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:55.849 [2024-12-05 03:15:24.985219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.849 [2024-12-05 03:15:25.033574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:55.849 [2024-12-05 03:15:25.033608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:55.849 [2024-12-05 03:15:25.033620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:55.849 [2024-12-05 03:15:25.033629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.849 [2024-12-05 03:15:25.033691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:55.849 [2024-12-05 03:15:25.033702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:55.849 [2024-12-05 03:15:25.033711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:55.849 [2024-12-05 03:15:25.033726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.850 [2024-12-05 03:15:25.033784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:55.850 [2024-12-05 03:15:25.033796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:55.850 [2024-12-05 03:15:25.033806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:55.850 [2024-12-05 03:15:25.033815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.850 [2024-12-05 03:15:25.033913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:55.850 [2024-12-05 03:15:25.033924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:55.850 [2024-12-05 03:15:25.033934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:55.850 [2024-12-05 03:15:25.033943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.850 [2024-12-05 03:15:25.033984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:55.850 [2024-12-05 03:15:25.033996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:55.850 [2024-12-05 03:15:25.034006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:55.850 [2024-12-05 03:15:25.034016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.850 [2024-12-05 03:15:25.034054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:55.850 [2024-12-05 03:15:25.034067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:55.850 [2024-12-05 03:15:25.034088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:55.850 [2024-12-05 03:15:25.034097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.850 
[2024-12-05 03:15:25.034147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:55.850 [2024-12-05 03:15:25.034169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:55.850 [2024-12-05 03:15:25.034179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:55.850 [2024-12-05 03:15:25.034189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.850 [2024-12-05 03:15:25.034316] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8562.895 ms, result 0 00:30:55.850 03:15:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:55.850 03:15:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:30:55.850 03:15:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:55.850 03:15:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:55.850 03:15:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:55.850 03:15:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83644 00:30:55.850 03:15:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:55.850 03:15:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83644 00:30:55.850 03:15:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:55.850 03:15:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83644 ']' 00:30:55.850 03:15:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:55.850 03:15:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:55.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:55.850 03:15:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:55.850 03:15:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:55.850 03:15:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:56.111 [2024-12-05 03:15:26.761964] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
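00:30:56.111 The entries above record the FTL shutdown finishing (result 0) and tcp_target_setup relaunching spdk_tgt from the saved tgt.json before the upgrade verification continues. A minimal sketch of that restart-and-verify sequence, assembled only from the binaries, paths and RPCs visible in this log; the readiness probe via rpc_get_methods is an assumption (the script itself uses the waitforlisten helper shown above):
00:30:56.111   # relaunch the target with the configuration persisted before shutdown
00:30:56.111   /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
00:30:56.111       --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
00:30:56.111   spdk_tgt_pid=$!
00:30:56.111   # assumed readiness probe: poll the RPC socket until it answers (waitforlisten does this in the real run)
00:30:56.111   /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 rpc_get_methods > /dev/null
00:30:56.111   # re-enable verbose mode and dump the properties to confirm the prepared-for-upgrade state survived the restart
00:30:56.111   /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:30:56.111   /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl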
00:30:56.111 [2024-12-05 03:15:26.762096] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83644 ] 00:30:56.111 [2024-12-05 03:15:26.919625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.372 [2024-12-05 03:15:27.004657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:56.946 [2024-12-05 03:15:27.575627] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:56.946 [2024-12-05 03:15:27.575685] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:56.946 [2024-12-05 03:15:27.723217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.946 [2024-12-05 03:15:27.723265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:56.946 [2024-12-05 03:15:27.723279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:56.946 [2024-12-05 03:15:27.723287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.946 [2024-12-05 03:15:27.723339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.946 [2024-12-05 03:15:27.723349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:56.946 [2024-12-05 03:15:27.723358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:30:56.946 [2024-12-05 03:15:27.723365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.946 [2024-12-05 03:15:27.723390] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:56.946 [2024-12-05 03:15:27.724107] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:56.946 [2024-12-05 03:15:27.724134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.946 [2024-12-05 03:15:27.724142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:56.946 [2024-12-05 03:15:27.724151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.753 ms 00:30:56.946 [2024-12-05 03:15:27.724158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.946 [2024-12-05 03:15:27.725385] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:56.946 [2024-12-05 03:15:27.738373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.946 [2024-12-05 03:15:27.738413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:56.946 [2024-12-05 03:15:27.738430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.989 ms 00:30:56.946 [2024-12-05 03:15:27.738438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.946 [2024-12-05 03:15:27.738496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.946 [2024-12-05 03:15:27.738505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:56.946 [2024-12-05 03:15:27.738514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:30:56.946 [2024-12-05 03:15:27.738521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.946 [2024-12-05 03:15:27.744650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.946 [2024-12-05 
03:15:27.744689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:56.946 [2024-12-05 03:15:27.744698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.072 ms 00:30:56.946 [2024-12-05 03:15:27.744706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.946 [2024-12-05 03:15:27.744768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.946 [2024-12-05 03:15:27.744778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:56.946 [2024-12-05 03:15:27.744786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:30:56.946 [2024-12-05 03:15:27.744794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.946 [2024-12-05 03:15:27.744847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.946 [2024-12-05 03:15:27.744859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:56.946 [2024-12-05 03:15:27.744867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:56.946 [2024-12-05 03:15:27.744875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.946 [2024-12-05 03:15:27.744898] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:56.946 [2024-12-05 03:15:27.748988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.946 [2024-12-05 03:15:27.749025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:56.946 [2024-12-05 03:15:27.749034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.094 ms 00:30:56.946 [2024-12-05 03:15:27.749044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.946 [2024-12-05 03:15:27.749084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.946 [2024-12-05 03:15:27.749093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:56.946 [2024-12-05 03:15:27.749101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:56.946 [2024-12-05 03:15:27.749108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.946 [2024-12-05 03:15:27.749142] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:56.946 [2024-12-05 03:15:27.749168] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:56.946 [2024-12-05 03:15:27.749205] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:56.946 [2024-12-05 03:15:27.749219] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:56.946 [2024-12-05 03:15:27.749323] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:56.946 [2024-12-05 03:15:27.749333] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:56.946 [2024-12-05 03:15:27.749343] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:56.946 [2024-12-05 03:15:27.749352] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:56.946 [2024-12-05 03:15:27.749361] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:30:56.946 [2024-12-05 03:15:27.749372] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:56.946 [2024-12-05 03:15:27.749379] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:56.946 [2024-12-05 03:15:27.749386] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:56.946 [2024-12-05 03:15:27.749394] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:56.946 [2024-12-05 03:15:27.749410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.946 [2024-12-05 03:15:27.749418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:56.946 [2024-12-05 03:15:27.749425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.270 ms 00:30:56.946 [2024-12-05 03:15:27.749433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.946 [2024-12-05 03:15:27.749518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.946 [2024-12-05 03:15:27.749526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:56.946 [2024-12-05 03:15:27.749536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:30:56.946 [2024-12-05 03:15:27.749543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.946 [2024-12-05 03:15:27.749644] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:56.946 [2024-12-05 03:15:27.749656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:56.946 [2024-12-05 03:15:27.749664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:56.946 [2024-12-05 03:15:27.749672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:56.946 [2024-12-05 03:15:27.749680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:56.946 [2024-12-05 03:15:27.749687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:56.946 [2024-12-05 03:15:27.749694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:56.946 [2024-12-05 03:15:27.749701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:56.946 [2024-12-05 03:15:27.749708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:56.946 [2024-12-05 03:15:27.749715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:56.946 [2024-12-05 03:15:27.749722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:56.947 [2024-12-05 03:15:27.749729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:56.947 [2024-12-05 03:15:27.749735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:56.947 [2024-12-05 03:15:27.749742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:56.947 [2024-12-05 03:15:27.749750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:56.947 [2024-12-05 03:15:27.749756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:56.947 [2024-12-05 03:15:27.749763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:56.947 [2024-12-05 03:15:27.749770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:56.947 [2024-12-05 03:15:27.749776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:56.947 [2024-12-05 03:15:27.749783] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:56.947 [2024-12-05 03:15:27.749790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:56.947 [2024-12-05 03:15:27.749797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:56.947 [2024-12-05 03:15:27.749804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:56.947 [2024-12-05 03:15:27.749817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:56.947 [2024-12-05 03:15:27.749824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:56.947 [2024-12-05 03:15:27.749830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:56.947 [2024-12-05 03:15:27.749837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:56.947 [2024-12-05 03:15:27.749844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:56.947 [2024-12-05 03:15:27.749850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:56.947 [2024-12-05 03:15:27.749857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:56.947 [2024-12-05 03:15:27.749863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:56.947 [2024-12-05 03:15:27.749871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:56.947 [2024-12-05 03:15:27.749878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:56.947 [2024-12-05 03:15:27.749884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:56.947 [2024-12-05 03:15:27.749891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:56.947 [2024-12-05 03:15:27.749897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:56.947 [2024-12-05 03:15:27.749903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:56.947 [2024-12-05 03:15:27.749911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:56.947 [2024-12-05 03:15:27.749917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:56.947 [2024-12-05 03:15:27.749924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:56.947 [2024-12-05 03:15:27.749930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:56.947 [2024-12-05 03:15:27.749937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:56.947 [2024-12-05 03:15:27.749943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:56.947 [2024-12-05 03:15:27.749950] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:56.947 [2024-12-05 03:15:27.749957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:56.947 [2024-12-05 03:15:27.749964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:56.947 [2024-12-05 03:15:27.749971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:56.947 [2024-12-05 03:15:27.749981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:56.947 [2024-12-05 03:15:27.749989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:56.947 [2024-12-05 03:15:27.749995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:56.947 [2024-12-05 03:15:27.750002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:56.947 [2024-12-05 03:15:27.750009] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:56.947 [2024-12-05 03:15:27.750016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:56.947 [2024-12-05 03:15:27.750024] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:56.947 [2024-12-05 03:15:27.750033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:56.947 [2024-12-05 03:15:27.750042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:56.947 [2024-12-05 03:15:27.750049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:56.947 [2024-12-05 03:15:27.750056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:56.947 [2024-12-05 03:15:27.750063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:56.947 [2024-12-05 03:15:27.750095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:56.947 [2024-12-05 03:15:27.750103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:56.947 [2024-12-05 03:15:27.750111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:56.947 [2024-12-05 03:15:27.750118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:56.947 [2024-12-05 03:15:27.750126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:56.947 [2024-12-05 03:15:27.750134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:56.947 [2024-12-05 03:15:27.750141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:56.947 [2024-12-05 03:15:27.750148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:56.947 [2024-12-05 03:15:27.750156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:56.947 [2024-12-05 03:15:27.750163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:56.947 [2024-12-05 03:15:27.750171] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:56.947 [2024-12-05 03:15:27.750179] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:56.947 [2024-12-05 03:15:27.750187] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:56.947 [2024-12-05 03:15:27.750194] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:56.947 [2024-12-05 03:15:27.750201] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:56.947 [2024-12-05 03:15:27.750208] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:56.947 [2024-12-05 03:15:27.750216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.947 [2024-12-05 03:15:27.750223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:56.947 [2024-12-05 03:15:27.750231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.640 ms 00:30:56.947 [2024-12-05 03:15:27.750238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.947 [2024-12-05 03:15:27.750278] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:30:56.947 [2024-12-05 03:15:27.750288] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:31:01.157 [2024-12-05 03:15:31.574683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.157 [2024-12-05 03:15:31.574776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:31:01.157 [2024-12-05 03:15:31.574794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3824.388 ms 00:31:01.157 [2024-12-05 03:15:31.574803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.157 [2024-12-05 03:15:31.605580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.157 [2024-12-05 03:15:31.605645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:01.157 [2024-12-05 03:15:31.605659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.528 ms 00:31:01.157 [2024-12-05 03:15:31.605668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.157 [2024-12-05 03:15:31.605757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.157 [2024-12-05 03:15:31.605775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:01.157 [2024-12-05 03:15:31.605785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:31:01.157 [2024-12-05 03:15:31.605794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.157 [2024-12-05 03:15:31.640552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.157 [2024-12-05 03:15:31.640602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:01.157 [2024-12-05 03:15:31.640618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.722 ms 00:31:01.157 [2024-12-05 03:15:31.640626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.157 [2024-12-05 03:15:31.640660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.157 [2024-12-05 03:15:31.640669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:01.157 [2024-12-05 03:15:31.640678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:01.157 [2024-12-05 03:15:31.640686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.157 [2024-12-05 03:15:31.641302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.157 [2024-12-05 03:15:31.641335] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:01.157 [2024-12-05 03:15:31.641346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.545 ms 00:31:01.157 [2024-12-05 03:15:31.641354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.157 [2024-12-05 03:15:31.641430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.157 [2024-12-05 03:15:31.641441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:01.157 [2024-12-05 03:15:31.641450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:31:01.157 [2024-12-05 03:15:31.641458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.157 [2024-12-05 03:15:31.658667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.157 [2024-12-05 03:15:31.658712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:01.157 [2024-12-05 03:15:31.658723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.183 ms 00:31:01.157 [2024-12-05 03:15:31.658731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.157 [2024-12-05 03:15:31.683457] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:01.157 [2024-12-05 03:15:31.683517] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:01.157 [2024-12-05 03:15:31.683533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.157 [2024-12-05 03:15:31.683542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:31:01.157 [2024-12-05 03:15:31.683552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.689 ms 00:31:01.157 [2024-12-05 03:15:31.683560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.157 [2024-12-05 03:15:31.698456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.157 [2024-12-05 03:15:31.698502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:31:01.157 [2024-12-05 03:15:31.698514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.838 ms 00:31:01.157 [2024-12-05 03:15:31.698522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.157 [2024-12-05 03:15:31.710761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.157 [2024-12-05 03:15:31.710809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:31:01.157 [2024-12-05 03:15:31.710820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.186 ms 00:31:01.157 [2024-12-05 03:15:31.710827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.157 [2024-12-05 03:15:31.723311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.157 [2024-12-05 03:15:31.723358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:31:01.157 [2024-12-05 03:15:31.723368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.437 ms 00:31:01.157 [2024-12-05 03:15:31.723376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.157 [2024-12-05 03:15:31.724020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.157 [2024-12-05 03:15:31.724053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:01.157 [2024-12-05 
03:15:31.724063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.531 ms 00:31:01.157 [2024-12-05 03:15:31.724091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.157 [2024-12-05 03:15:31.790066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.157 [2024-12-05 03:15:31.790140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:01.157 [2024-12-05 03:15:31.790154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 65.954 ms 00:31:01.157 [2024-12-05 03:15:31.790163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.157 [2024-12-05 03:15:31.801662] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:01.157 [2024-12-05 03:15:31.802676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.157 [2024-12-05 03:15:31.802715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:01.157 [2024-12-05 03:15:31.802726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.454 ms 00:31:01.157 [2024-12-05 03:15:31.802734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.158 [2024-12-05 03:15:31.802814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.158 [2024-12-05 03:15:31.802827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:31:01.158 [2024-12-05 03:15:31.802837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:31:01.158 [2024-12-05 03:15:31.802845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.158 [2024-12-05 03:15:31.802902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.158 [2024-12-05 03:15:31.802914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:01.158 [2024-12-05 03:15:31.802923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:31:01.158 [2024-12-05 03:15:31.802931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.158 [2024-12-05 03:15:31.802953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.158 [2024-12-05 03:15:31.802962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:01.158 [2024-12-05 03:15:31.802973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:01.158 [2024-12-05 03:15:31.802981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.158 [2024-12-05 03:15:31.803019] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:01.158 [2024-12-05 03:15:31.803029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.158 [2024-12-05 03:15:31.803038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:01.158 [2024-12-05 03:15:31.803046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:31:01.158 [2024-12-05 03:15:31.803053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.158 [2024-12-05 03:15:31.828389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.158 [2024-12-05 03:15:31.828441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:31:01.158 [2024-12-05 03:15:31.828453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.286 ms 00:31:01.158 [2024-12-05 03:15:31.828461] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.158 [2024-12-05 03:15:31.828545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.158 [2024-12-05 03:15:31.828555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:01.158 [2024-12-05 03:15:31.828565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:31:01.158 [2024-12-05 03:15:31.828573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.158 [2024-12-05 03:15:31.830210] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4106.446 ms, result 0 00:31:01.158 [2024-12-05 03:15:31.844791] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:01.158 [2024-12-05 03:15:31.860808] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:01.158 [2024-12-05 03:15:31.868966] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:01.158 03:15:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:01.158 03:15:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:01.158 03:15:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:01.158 03:15:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:01.158 03:15:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:01.419 [2024-12-05 03:15:32.105035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.419 [2024-12-05 03:15:32.105104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:01.419 [2024-12-05 03:15:32.105123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:31:01.419 [2024-12-05 03:15:32.105133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.419 [2024-12-05 03:15:32.105159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.419 [2024-12-05 03:15:32.105169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:01.419 [2024-12-05 03:15:32.105179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:01.419 [2024-12-05 03:15:32.105187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.419 [2024-12-05 03:15:32.105208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.419 [2024-12-05 03:15:32.105218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:01.419 [2024-12-05 03:15:32.105227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:01.419 [2024-12-05 03:15:32.105235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.419 [2024-12-05 03:15:32.105299] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.260 ms, result 0 00:31:01.419 true 00:31:01.419 03:15:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:01.681 { 00:31:01.681 "name": "ftl", 00:31:01.681 "properties": [ 00:31:01.681 { 00:31:01.681 "name": "superblock_version", 00:31:01.681 "value": 5, 00:31:01.681 "read-only": true 00:31:01.681 }, 
00:31:01.681 { 00:31:01.681 "name": "base_device", 00:31:01.681 "bands": [ 00:31:01.681 { 00:31:01.681 "id": 0, 00:31:01.681 "state": "CLOSED", 00:31:01.681 "validity": 1.0 00:31:01.681 }, 00:31:01.681 { 00:31:01.681 "id": 1, 00:31:01.681 "state": "CLOSED", 00:31:01.681 "validity": 1.0 00:31:01.681 }, 00:31:01.681 { 00:31:01.681 "id": 2, 00:31:01.681 "state": "CLOSED", 00:31:01.681 "validity": 0.007843137254901933 00:31:01.681 }, 00:31:01.681 { 00:31:01.681 "id": 3, 00:31:01.681 "state": "FREE", 00:31:01.681 "validity": 0.0 00:31:01.681 }, 00:31:01.681 { 00:31:01.681 "id": 4, 00:31:01.681 "state": "FREE", 00:31:01.681 "validity": 0.0 00:31:01.681 }, 00:31:01.681 { 00:31:01.681 "id": 5, 00:31:01.681 "state": "FREE", 00:31:01.681 "validity": 0.0 00:31:01.681 }, 00:31:01.681 { 00:31:01.681 "id": 6, 00:31:01.681 "state": "FREE", 00:31:01.681 "validity": 0.0 00:31:01.681 }, 00:31:01.681 { 00:31:01.681 "id": 7, 00:31:01.681 "state": "FREE", 00:31:01.681 "validity": 0.0 00:31:01.681 }, 00:31:01.681 { 00:31:01.681 "id": 8, 00:31:01.681 "state": "FREE", 00:31:01.681 "validity": 0.0 00:31:01.681 }, 00:31:01.681 { 00:31:01.681 "id": 9, 00:31:01.681 "state": "FREE", 00:31:01.681 "validity": 0.0 00:31:01.681 }, 00:31:01.681 { 00:31:01.681 "id": 10, 00:31:01.681 "state": "FREE", 00:31:01.681 "validity": 0.0 00:31:01.681 }, 00:31:01.681 { 00:31:01.681 "id": 11, 00:31:01.681 "state": "FREE", 00:31:01.681 "validity": 0.0 00:31:01.681 }, 00:31:01.681 { 00:31:01.681 "id": 12, 00:31:01.681 "state": "FREE", 00:31:01.681 "validity": 0.0 00:31:01.681 }, 00:31:01.681 { 00:31:01.681 "id": 13, 00:31:01.681 "state": "FREE", 00:31:01.681 "validity": 0.0 00:31:01.681 }, 00:31:01.681 { 00:31:01.681 "id": 14, 00:31:01.681 "state": "FREE", 00:31:01.681 "validity": 0.0 00:31:01.681 }, 00:31:01.681 { 00:31:01.681 "id": 15, 00:31:01.681 "state": "FREE", 00:31:01.681 "validity": 0.0 00:31:01.681 }, 00:31:01.681 { 00:31:01.681 "id": 16, 00:31:01.681 "state": "FREE", 00:31:01.681 "validity": 0.0 00:31:01.681 }, 00:31:01.681 { 00:31:01.681 "id": 17, 00:31:01.681 "state": "FREE", 00:31:01.681 "validity": 0.0 00:31:01.681 } 00:31:01.682 ], 00:31:01.682 "read-only": true 00:31:01.682 }, 00:31:01.682 { 00:31:01.682 "name": "cache_device", 00:31:01.682 "type": "bdev", 00:31:01.682 "chunks": [ 00:31:01.682 { 00:31:01.682 "id": 0, 00:31:01.682 "state": "INACTIVE", 00:31:01.682 "utilization": 0.0 00:31:01.682 }, 00:31:01.682 { 00:31:01.682 "id": 1, 00:31:01.682 "state": "OPEN", 00:31:01.682 "utilization": 0.0 00:31:01.682 }, 00:31:01.682 { 00:31:01.682 "id": 2, 00:31:01.682 "state": "OPEN", 00:31:01.682 "utilization": 0.0 00:31:01.682 }, 00:31:01.682 { 00:31:01.682 "id": 3, 00:31:01.682 "state": "FREE", 00:31:01.682 "utilization": 0.0 00:31:01.682 }, 00:31:01.682 { 00:31:01.682 "id": 4, 00:31:01.682 "state": "FREE", 00:31:01.682 "utilization": 0.0 00:31:01.682 } 00:31:01.682 ], 00:31:01.682 "read-only": true 00:31:01.682 }, 00:31:01.682 { 00:31:01.682 "name": "verbose_mode", 00:31:01.682 "value": true, 00:31:01.682 "unit": "", 00:31:01.682 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:01.682 }, 00:31:01.682 { 00:31:01.682 "name": "prep_upgrade_on_shutdown", 00:31:01.682 "value": false, 00:31:01.682 "unit": "", 00:31:01.682 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:01.682 } 00:31:01.682 ] 00:31:01.682 } 00:31:01.682 03:15:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:31:01.682 03:15:32 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:01.682 03:15:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:01.943 03:15:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:31:01.943 03:15:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:31:01.943 03:15:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:31:01.943 03:15:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:31:01.943 03:15:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:01.943 03:15:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:31:01.943 03:15:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:31:01.943 03:15:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:31:01.943 03:15:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:01.943 03:15:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:01.943 Validate MD5 checksum, iteration 1 00:31:01.943 03:15:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:01.943 03:15:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:01.943 03:15:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:01.943 03:15:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:01.943 03:15:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:01.943 03:15:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:01.943 03:15:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:01.943 03:15:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:02.205 [2024-12-05 03:15:32.830579] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
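(The jq checks above confirm that no cache chunks are in use and no bands are open before validation starts. The test_validate_checksum helper traced here then boils down to roughly the loop below; this is a simplified sketch of the upgrade_shutdown.sh logic as visible in the trace, where $testdir, iterations and the md5 array are illustrative names and tcp_dd is the ftl/common.sh helper that runs spdk_dd against the NVMe/TCP initiator config.)

    skip=0
    for (( i = 0; i < iterations; i++ )); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Read 1024 x 1 MiB blocks from the ftln1 namespace into a scratch file,
        # starting $skip MiB into the device, via spdk_dd over NVMe/TCP.
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        # Hash what was read and compare it against the reference checksum recorded
        # earlier in the test (held here in an illustrative ${md5[$i]} array); any
        # mismatch fails the validation step.
        sum=$(md5sum "$testdir/file" | cut -d' ' -f1)
        [[ $sum == "${md5[$i]}" ]] || return 1
    done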
00:31:02.205 [2024-12-05 03:15:32.830700] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83724 ] 00:31:02.205 [2024-12-05 03:15:32.991801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.465 [2024-12-05 03:15:33.083743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:03.852  [2024-12-05T03:15:35.650Z] Copying: 594/1024 [MB] (594 MBps) [2024-12-05T03:15:36.616Z] Copying: 1024/1024 [MB] (average 605 MBps) 00:31:05.772 00:31:05.772 03:15:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:05.772 03:15:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:07.687 03:15:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:07.687 Validate MD5 checksum, iteration 2 00:31:07.687 03:15:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=18f709a886cb6244ea328319e6f82c3b 00:31:07.687 03:15:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 18f709a886cb6244ea328319e6f82c3b != \1\8\f\7\0\9\a\8\8\6\c\b\6\2\4\4\e\a\3\2\8\3\1\9\e\6\f\8\2\c\3\b ]] 00:31:07.687 03:15:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:07.687 03:15:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:07.687 03:15:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:07.687 03:15:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:07.687 03:15:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:07.687 03:15:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:07.687 03:15:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:07.687 03:15:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:07.687 03:15:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:07.687 [2024-12-05 03:15:38.515649] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:31:07.687 [2024-12-05 03:15:38.515761] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83791 ] 00:31:07.948 [2024-12-05 03:15:38.671533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:07.948 [2024-12-05 03:15:38.746350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:09.336  [2024-12-05T03:15:40.752Z] Copying: 667/1024 [MB] (667 MBps) [2024-12-05T03:15:46.043Z] Copying: 1024/1024 [MB] (average 651 MBps) 00:31:15.199 00:31:15.199 03:15:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:15.199 03:15:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:17.113 03:15:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:17.113 03:15:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=7293132af09cb3953beb5f51d3856cb0 00:31:17.113 03:15:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 7293132af09cb3953beb5f51d3856cb0 != \7\2\9\3\1\3\2\a\f\0\9\c\b\3\9\5\3\b\e\b\5\f\5\1\d\3\8\5\6\c\b\0 ]] 00:31:17.113 03:15:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:17.113 03:15:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:17.113 03:15:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:31:17.113 03:15:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 83644 ]] 00:31:17.113 03:15:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 83644 00:31:17.113 03:15:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:31:17.113 03:15:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:31:17.113 03:15:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:17.113 03:15:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:17.113 03:15:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:17.113 03:15:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83890 00:31:17.113 03:15:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:17.113 03:15:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83890 00:31:17.113 03:15:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:17.113 03:15:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83890 ']' 00:31:17.113 03:15:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:17.113 03:15:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:17.113 03:15:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:17.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
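(The dirty-shutdown step traced above — tcp_target_shutdown_dirty followed by tcp_target_setup — reduces to roughly the sequence below; a condensed sketch using values visible in this run, where $rootdir stands for /home/vagrant/spdk_repo/spdk and waitforlisten is the autotest_common.sh helper that polls the RPC socket.)

    # SIGKILL the target instead of shutting it down over RPC, so FTL never gets to
    # persist a clean shutdown state; the next startup must run dirty recovery.
    kill -9 "$spdk_tgt_pid"            # pid 83644 in this run
    unset spdk_tgt_pid

    # Relaunch the target on core 0 from the JSON config captured after the first
    # startup; FTL comes up in the dirty state and recovers band state, P2L
    # checkpoints and open NV-cache chunks, as the log that follows shows.
    "$rootdir/build/bin/spdk_tgt" '--cpumask=[0]' \
        --config="$rootdir/test/ftl/config/tgt.json" &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"      # wait for /var/tmp/spdk.sock to accept RPCs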
00:31:17.113 03:15:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:17.113 03:15:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:17.372 [2024-12-05 03:15:47.960771] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:31:17.372 [2024-12-05 03:15:47.960890] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83890 ] 00:31:17.372 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 83644 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:31:17.372 [2024-12-05 03:15:48.118974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:17.372 [2024-12-05 03:15:48.206397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.941 [2024-12-05 03:15:48.779209] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:17.941 [2024-12-05 03:15:48.779265] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:18.201 [2024-12-05 03:15:48.921981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.201 [2024-12-05 03:15:48.922018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:18.201 [2024-12-05 03:15:48.922028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:18.201 [2024-12-05 03:15:48.922034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:18.201 [2024-12-05 03:15:48.922083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.201 [2024-12-05 03:15:48.922091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:18.201 [2024-12-05 03:15:48.922097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:31:18.201 [2024-12-05 03:15:48.922103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:18.201 [2024-12-05 03:15:48.922121] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:18.201 [2024-12-05 03:15:48.922707] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:18.201 [2024-12-05 03:15:48.922850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.201 [2024-12-05 03:15:48.922859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:18.201 [2024-12-05 03:15:48.922866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.735 ms 00:31:18.201 [2024-12-05 03:15:48.922872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:18.201 [2024-12-05 03:15:48.923118] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:18.201 [2024-12-05 03:15:48.936177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.201 [2024-12-05 03:15:48.936205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:18.201 [2024-12-05 03:15:48.936214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.059 ms 00:31:18.201 [2024-12-05 03:15:48.936221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:18.201 [2024-12-05 03:15:48.942897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:31:18.201 [2024-12-05 03:15:48.942925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:18.201 [2024-12-05 03:15:48.942932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:31:18.201 [2024-12-05 03:15:48.942938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:18.201 [2024-12-05 03:15:48.943196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.201 [2024-12-05 03:15:48.943206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:18.201 [2024-12-05 03:15:48.943213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.199 ms 00:31:18.201 [2024-12-05 03:15:48.943219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:18.201 [2024-12-05 03:15:48.943260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.201 [2024-12-05 03:15:48.943268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:18.201 [2024-12-05 03:15:48.943274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:31:18.201 [2024-12-05 03:15:48.943279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:18.201 [2024-12-05 03:15:48.943299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.201 [2024-12-05 03:15:48.943306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:18.201 [2024-12-05 03:15:48.943312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:18.201 [2024-12-05 03:15:48.943317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:18.201 [2024-12-05 03:15:48.943332] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:18.201 [2024-12-05 03:15:48.945637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.201 [2024-12-05 03:15:48.945758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:18.201 [2024-12-05 03:15:48.945770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.308 ms 00:31:18.201 [2024-12-05 03:15:48.945776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:18.201 [2024-12-05 03:15:48.945801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.201 [2024-12-05 03:15:48.945808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:18.201 [2024-12-05 03:15:48.945814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:18.201 [2024-12-05 03:15:48.945820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:18.201 [2024-12-05 03:15:48.945836] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:18.201 [2024-12-05 03:15:48.945851] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:18.201 [2024-12-05 03:15:48.945877] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:18.201 [2024-12-05 03:15:48.945890] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:18.201 [2024-12-05 03:15:48.945968] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:18.201 [2024-12-05 03:15:48.945976] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:18.202 [2024-12-05 03:15:48.945985] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:18.202 [2024-12-05 03:15:48.945993] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:18.202 [2024-12-05 03:15:48.946000] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:18.202 [2024-12-05 03:15:48.946007] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:18.202 [2024-12-05 03:15:48.946012] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:18.202 [2024-12-05 03:15:48.946018] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:18.202 [2024-12-05 03:15:48.946023] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:18.202 [2024-12-05 03:15:48.946031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.202 [2024-12-05 03:15:48.946037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:18.202 [2024-12-05 03:15:48.946043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.196 ms 00:31:18.202 [2024-12-05 03:15:48.946048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:18.202 [2024-12-05 03:15:48.946123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.202 [2024-12-05 03:15:48.946130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:18.202 [2024-12-05 03:15:48.946136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:31:18.202 [2024-12-05 03:15:48.946142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:18.202 [2024-12-05 03:15:48.946217] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:18.202 [2024-12-05 03:15:48.946227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:18.202 [2024-12-05 03:15:48.946234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:18.202 [2024-12-05 03:15:48.946240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:18.202 [2024-12-05 03:15:48.946246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:18.202 [2024-12-05 03:15:48.946251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:18.202 [2024-12-05 03:15:48.946257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:18.202 [2024-12-05 03:15:48.946262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:18.202 [2024-12-05 03:15:48.946268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:18.202 [2024-12-05 03:15:48.946274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:18.202 [2024-12-05 03:15:48.946281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:18.202 [2024-12-05 03:15:48.946286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:18.202 [2024-12-05 03:15:48.946291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:18.202 [2024-12-05 03:15:48.946296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:18.202 [2024-12-05 03:15:48.946301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:31:18.202 [2024-12-05 03:15:48.946305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:18.202 [2024-12-05 03:15:48.946310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:18.202 [2024-12-05 03:15:48.946315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:18.202 [2024-12-05 03:15:48.946320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:18.202 [2024-12-05 03:15:48.946325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:18.202 [2024-12-05 03:15:48.946330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:18.202 [2024-12-05 03:15:48.946339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:18.202 [2024-12-05 03:15:48.946344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:18.202 [2024-12-05 03:15:48.946349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:18.202 [2024-12-05 03:15:48.946354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:18.202 [2024-12-05 03:15:48.946359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:18.202 [2024-12-05 03:15:48.946364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:18.202 [2024-12-05 03:15:48.946369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:18.202 [2024-12-05 03:15:48.946374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:18.202 [2024-12-05 03:15:48.946378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:18.202 [2024-12-05 03:15:48.946383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:18.202 [2024-12-05 03:15:48.946388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:18.202 [2024-12-05 03:15:48.946393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:18.202 [2024-12-05 03:15:48.946398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:18.202 [2024-12-05 03:15:48.946403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:18.202 [2024-12-05 03:15:48.946409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:18.202 [2024-12-05 03:15:48.946413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:18.202 [2024-12-05 03:15:48.946418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:18.202 [2024-12-05 03:15:48.946424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:18.202 [2024-12-05 03:15:48.946429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:18.202 [2024-12-05 03:15:48.946434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:18.202 [2024-12-05 03:15:48.946439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:18.202 [2024-12-05 03:15:48.946444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:18.202 [2024-12-05 03:15:48.946449] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:18.202 [2024-12-05 03:15:48.946455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:18.202 [2024-12-05 03:15:48.946460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:18.202 [2024-12-05 03:15:48.946465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:31:18.202 [2024-12-05 03:15:48.946471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:18.202 [2024-12-05 03:15:48.946476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:18.202 [2024-12-05 03:15:48.946481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:18.202 [2024-12-05 03:15:48.946486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:18.202 [2024-12-05 03:15:48.946491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:18.202 [2024-12-05 03:15:48.946496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:18.202 [2024-12-05 03:15:48.946503] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:18.202 [2024-12-05 03:15:48.946509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:18.202 [2024-12-05 03:15:48.946515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:18.202 [2024-12-05 03:15:48.946520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:18.202 [2024-12-05 03:15:48.946526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:18.202 [2024-12-05 03:15:48.946531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:18.202 [2024-12-05 03:15:48.946536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:18.202 [2024-12-05 03:15:48.946541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:18.202 [2024-12-05 03:15:48.946547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:18.202 [2024-12-05 03:15:48.946552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:18.202 [2024-12-05 03:15:48.946557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:18.202 [2024-12-05 03:15:48.946562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:18.202 [2024-12-05 03:15:48.946567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:18.202 [2024-12-05 03:15:48.946573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:18.202 [2024-12-05 03:15:48.946578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:18.202 [2024-12-05 03:15:48.946583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:18.202 [2024-12-05 03:15:48.946588] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:31:18.202 [2024-12-05 03:15:48.946594] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:18.202 [2024-12-05 03:15:48.946602] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:18.202 [2024-12-05 03:15:48.946609] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:18.202 [2024-12-05 03:15:48.946614] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:18.202 [2024-12-05 03:15:48.946620] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:18.202 [2024-12-05 03:15:48.946625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.202 [2024-12-05 03:15:48.946631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:18.202 [2024-12-05 03:15:48.946636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.461 ms 00:31:18.202 [2024-12-05 03:15:48.946641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:18.202 [2024-12-05 03:15:48.966278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.202 [2024-12-05 03:15:48.966388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:18.202 [2024-12-05 03:15:48.966636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.599 ms 00:31:18.202 [2024-12-05 03:15:48.966673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:18.202 [2024-12-05 03:15:48.966721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.202 [2024-12-05 03:15:48.966740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:18.202 [2024-12-05 03:15:48.966756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:31:18.202 [2024-12-05 03:15:48.966771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:18.202 [2024-12-05 03:15:48.991141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.202 [2024-12-05 03:15:48.991241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:18.202 [2024-12-05 03:15:48.991283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.316 ms 00:31:18.202 [2024-12-05 03:15:48.991300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:18.202 [2024-12-05 03:15:48.991337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.202 [2024-12-05 03:15:48.991660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:18.202 [2024-12-05 03:15:48.991746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:18.202 [2024-12-05 03:15:48.991772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:18.202 [2024-12-05 03:15:48.991903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.202 [2024-12-05 03:15:48.991958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:18.202 [2024-12-05 03:15:48.991997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:31:18.202 [2024-12-05 03:15:48.992014] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:18.202 [2024-12-05 03:15:48.992062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.202 [2024-12-05 03:15:48.992146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:18.202 [2024-12-05 03:15:48.992184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:31:18.202 [2024-12-05 03:15:48.992198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:18.202 [2024-12-05 03:15:49.003717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.202 [2024-12-05 03:15:49.003812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:18.202 [2024-12-05 03:15:49.003854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.489 ms 00:31:18.202 [2024-12-05 03:15:49.003871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:18.202 [2024-12-05 03:15:49.003965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.202 [2024-12-05 03:15:49.003992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:31:18.202 [2024-12-05 03:15:49.004008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:18.202 [2024-12-05 03:15:49.004049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:18.202 [2024-12-05 03:15:49.032465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.202 [2024-12-05 03:15:49.032573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:31:18.202 [2024-12-05 03:15:49.032619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.381 ms 00:31:18.202 [2024-12-05 03:15:49.032637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:18.202 [2024-12-05 03:15:49.039685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.202 [2024-12-05 03:15:49.039771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:18.202 [2024-12-05 03:15:49.039824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.378 ms 00:31:18.202 [2024-12-05 03:15:49.039841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:18.461 [2024-12-05 03:15:49.084341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.461 [2024-12-05 03:15:49.084452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:18.461 [2024-12-05 03:15:49.084496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.448 ms 00:31:18.461 [2024-12-05 03:15:49.084514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:18.461 [2024-12-05 03:15:49.084832] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:31:18.461 [2024-12-05 03:15:49.085014] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:31:18.461 [2024-12-05 03:15:49.085146] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:31:18.461 [2024-12-05 03:15:49.085258] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:31:18.461 [2024-12-05 03:15:49.085304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.461 [2024-12-05 03:15:49.085321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:31:18.461 [2024-12-05 
03:15:49.085338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.543 ms 00:31:18.461 [2024-12-05 03:15:49.085352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:18.461 [2024-12-05 03:15:49.085415] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:31:18.461 [2024-12-05 03:15:49.085461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.461 [2024-12-05 03:15:49.085566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:31:18.461 [2024-12-05 03:15:49.085581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:31:18.461 [2024-12-05 03:15:49.085595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:18.462 [2024-12-05 03:15:49.096976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.462 [2024-12-05 03:15:49.097082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:31:18.462 [2024-12-05 03:15:49.097129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.354 ms 00:31:18.462 [2024-12-05 03:15:49.097147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:18.462 [2024-12-05 03:15:49.103682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.462 [2024-12-05 03:15:49.103763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:31:18.462 [2024-12-05 03:15:49.103801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:31:18.462 [2024-12-05 03:15:49.103818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:18.462 [2024-12-05 03:15:49.103896] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:31:18.462 [2024-12-05 03:15:49.104286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.462 [2024-12-05 03:15:49.104359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:18.462 [2024-12-05 03:15:49.104383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.388 ms 00:31:18.462 [2024-12-05 03:15:49.104399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.032 [2024-12-05 03:15:49.667403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.032 [2024-12-05 03:15:49.667550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:19.032 [2024-12-05 03:15:49.667598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 562.360 ms 00:31:19.032 [2024-12-05 03:15:49.667616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.032 [2024-12-05 03:15:49.671049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.032 [2024-12-05 03:15:49.671162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:19.032 [2024-12-05 03:15:49.671212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.115 ms 00:31:19.032 [2024-12-05 03:15:49.671229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.032 [2024-12-05 03:15:49.671833] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:31:19.032 [2024-12-05 03:15:49.671930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.032 [2024-12-05 03:15:49.671973] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:19.032 [2024-12-05 03:15:49.671993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.664 ms 00:31:19.032 [2024-12-05 03:15:49.672008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.032 [2024-12-05 03:15:49.672033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.032 [2024-12-05 03:15:49.672040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:19.032 [2024-12-05 03:15:49.672047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:19.032 [2024-12-05 03:15:49.672057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.032 [2024-12-05 03:15:49.672095] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 568.196 ms, result 0 00:31:19.032 [2024-12-05 03:15:49.672126] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:31:19.032 [2024-12-05 03:15:49.672206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.032 [2024-12-05 03:15:49.672214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:19.032 [2024-12-05 03:15:49.672221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.081 ms 00:31:19.032 [2024-12-05 03:15:49.672226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.606 [2024-12-05 03:15:50.402140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.606 [2024-12-05 03:15:50.402265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:19.606 [2024-12-05 03:15:50.402326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 729.152 ms 00:31:19.606 [2024-12-05 03:15:50.402345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.606 [2024-12-05 03:15:50.406148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.606 [2024-12-05 03:15:50.406246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:19.606 [2024-12-05 03:15:50.406296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.313 ms 00:31:19.606 [2024-12-05 03:15:50.406314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.606 [2024-12-05 03:15:50.406889] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:31:19.606 [2024-12-05 03:15:50.406982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.606 [2024-12-05 03:15:50.407024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:19.606 [2024-12-05 03:15:50.407043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.636 ms 00:31:19.606 [2024-12-05 03:15:50.407057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.606 [2024-12-05 03:15:50.407101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.606 [2024-12-05 03:15:50.407189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:19.606 [2024-12-05 03:15:50.407208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:19.606 [2024-12-05 03:15:50.407223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.606 [2024-12-05 
03:15:50.407265] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 735.131 ms, result 0 00:31:19.606 [2024-12-05 03:15:50.407349] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:19.606 [2024-12-05 03:15:50.407376] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:19.606 [2024-12-05 03:15:50.407401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.606 [2024-12-05 03:15:50.407441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:31:19.606 [2024-12-05 03:15:50.407484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1303.519 ms 00:31:19.606 [2024-12-05 03:15:50.407502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.606 [2024-12-05 03:15:50.407558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.606 [2024-12-05 03:15:50.407582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:31:19.606 [2024-12-05 03:15:50.407599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:19.606 [2024-12-05 03:15:50.407614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.606 [2024-12-05 03:15:50.416398] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:19.606 [2024-12-05 03:15:50.416548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.606 [2024-12-05 03:15:50.416571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:19.606 [2024-12-05 03:15:50.416619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.827 ms 00:31:19.606 [2024-12-05 03:15:50.416636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.606 [2024-12-05 03:15:50.417194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.606 [2024-12-05 03:15:50.417261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:31:19.606 [2024-12-05 03:15:50.417306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.466 ms 00:31:19.606 [2024-12-05 03:15:50.417324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.606 [2024-12-05 03:15:50.419042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.606 [2024-12-05 03:15:50.419060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:31:19.606 [2024-12-05 03:15:50.419068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.694 ms 00:31:19.607 [2024-12-05 03:15:50.419086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.607 [2024-12-05 03:15:50.419116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.607 [2024-12-05 03:15:50.419123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:31:19.607 [2024-12-05 03:15:50.419130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:19.607 [2024-12-05 03:15:50.419138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.607 [2024-12-05 03:15:50.419215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.607 [2024-12-05 03:15:50.419223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:19.607 
[2024-12-05 03:15:50.419230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:31:19.607 [2024-12-05 03:15:50.419235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.607 [2024-12-05 03:15:50.419250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.607 [2024-12-05 03:15:50.419257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:19.607 [2024-12-05 03:15:50.419263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:19.607 [2024-12-05 03:15:50.419269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.607 [2024-12-05 03:15:50.419294] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:19.607 [2024-12-05 03:15:50.419301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.607 [2024-12-05 03:15:50.419307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:19.607 [2024-12-05 03:15:50.419313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:31:19.607 [2024-12-05 03:15:50.419318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.607 [2024-12-05 03:15:50.419354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.607 [2024-12-05 03:15:50.419361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:19.607 [2024-12-05 03:15:50.419367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:31:19.607 [2024-12-05 03:15:50.419372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.607 [2024-12-05 03:15:50.420169] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1497.833 ms, result 0 00:31:19.607 [2024-12-05 03:15:50.432886] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:19.868 [2024-12-05 03:15:50.448894] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:19.868 [2024-12-05 03:15:50.456989] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:19.868 03:15:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:19.868 03:15:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:19.868 03:15:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:19.868 03:15:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:19.868 03:15:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:31:19.868 03:15:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:19.868 Validate MD5 checksum, iteration 1 00:31:19.868 03:15:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:19.868 03:15:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:19.868 03:15:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:19.868 03:15:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:19.868 03:15:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:19.868 03:15:50 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:19.869 03:15:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:19.869 03:15:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:19.869 03:15:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:19.869 [2024-12-05 03:15:50.558796] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:31:19.869 [2024-12-05 03:15:50.558904] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83922 ] 00:31:20.130 [2024-12-05 03:15:50.719399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.130 [2024-12-05 03:15:50.811807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:21.516  [2024-12-05T03:15:52.932Z] Copying: 683/1024 [MB] (683 MBps) [2024-12-05T03:15:53.876Z] Copying: 1024/1024 [MB] (average 650 MBps) 00:31:23.032 00:31:23.032 03:15:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:23.032 03:15:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:25.582 03:15:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:25.582 Validate MD5 checksum, iteration 2 00:31:25.582 03:15:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=18f709a886cb6244ea328319e6f82c3b 00:31:25.582 03:15:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 18f709a886cb6244ea328319e6f82c3b != \1\8\f\7\0\9\a\8\8\6\c\b\6\2\4\4\e\a\3\2\8\3\1\9\e\6\f\8\2\c\3\b ]] 00:31:25.582 03:15:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:25.582 03:15:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:25.582 03:15:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:25.582 03:15:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:25.582 03:15:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:25.582 03:15:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:25.582 03:15:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:25.582 03:15:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:25.582 03:15:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:25.582 [2024-12-05 03:15:55.984406] Starting SPDK v25.01-pre git sha1 
8d3947977 / DPDK 24.03.0 initialization... 00:31:25.582 [2024-12-05 03:15:55.984658] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83983 ] 00:31:25.582 [2024-12-05 03:15:56.138455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.582 [2024-12-05 03:15:56.213231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:26.970  [2024-12-05T03:15:58.407Z] Copying: 643/1024 [MB] (643 MBps) [2024-12-05T03:15:59.378Z] Copying: 1024/1024 [MB] (average 637 MBps) 00:31:28.534 00:31:28.534 03:15:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:28.534 03:15:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:29.914 03:16:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:29.914 03:16:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=7293132af09cb3953beb5f51d3856cb0 00:31:29.914 03:16:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 7293132af09cb3953beb5f51d3856cb0 != \7\2\9\3\1\3\2\a\f\0\9\c\b\3\9\5\3\b\e\b\5\f\5\1\d\3\8\5\6\c\b\0 ]] 00:31:29.914 03:16:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:29.914 03:16:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:29.914 03:16:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:31:29.914 03:16:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:31:29.914 03:16:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:31:29.914 03:16:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:30.173 03:16:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:31:30.173 03:16:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:31:30.173 03:16:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:31:30.173 03:16:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:31:30.173 03:16:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83890 ]] 00:31:30.173 03:16:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83890 00:31:30.173 03:16:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83890 ']' 00:31:30.173 03:16:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83890 00:31:30.173 03:16:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:30.173 03:16:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:30.173 03:16:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83890 00:31:30.173 killing process with pid 83890 00:31:30.173 03:16:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:30.173 03:16:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:30.173 03:16:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83890' 00:31:30.173 03:16:00 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 83890 00:31:30.173 03:16:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83890 00:31:30.741 [2024-12-05 03:16:01.358308] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:30.741 [2024-12-05 03:16:01.368402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.741 [2024-12-05 03:16:01.368440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:30.741 [2024-12-05 03:16:01.368453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:30.741 [2024-12-05 03:16:01.368460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.741 [2024-12-05 03:16:01.368480] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:30.741 [2024-12-05 03:16:01.370715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.741 [2024-12-05 03:16:01.370742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:30.741 [2024-12-05 03:16:01.370755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.224 ms 00:31:30.741 [2024-12-05 03:16:01.370762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.741 [2024-12-05 03:16:01.370976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.741 [2024-12-05 03:16:01.370993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:30.741 [2024-12-05 03:16:01.371001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.193 ms 00:31:30.741 [2024-12-05 03:16:01.371007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.741 [2024-12-05 03:16:01.372505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.741 [2024-12-05 03:16:01.372531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:30.741 [2024-12-05 03:16:01.372539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.485 ms 00:31:30.741 [2024-12-05 03:16:01.372550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.741 [2024-12-05 03:16:01.373423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.741 [2024-12-05 03:16:01.373451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:30.741 [2024-12-05 03:16:01.373460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.845 ms 00:31:30.741 [2024-12-05 03:16:01.373467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.741 [2024-12-05 03:16:01.380866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.741 [2024-12-05 03:16:01.380893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:30.741 [2024-12-05 03:16:01.380902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.367 ms 00:31:30.741 [2024-12-05 03:16:01.380912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.741 [2024-12-05 03:16:01.385214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.741 [2024-12-05 03:16:01.385242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:30.741 [2024-12-05 03:16:01.385251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.272 ms 00:31:30.742 [2024-12-05 03:16:01.385258] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:30.742 [2024-12-05 03:16:01.385325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.742 [2024-12-05 03:16:01.385333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:30.742 [2024-12-05 03:16:01.385341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:31:30.742 [2024-12-05 03:16:01.385351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.742 [2024-12-05 03:16:01.392404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.742 [2024-12-05 03:16:01.392440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:30.742 [2024-12-05 03:16:01.392447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.040 ms 00:31:30.742 [2024-12-05 03:16:01.392453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.742 [2024-12-05 03:16:01.399693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.742 [2024-12-05 03:16:01.399829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:30.742 [2024-12-05 03:16:01.399842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.213 ms 00:31:30.742 [2024-12-05 03:16:01.399848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.742 [2024-12-05 03:16:01.406823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.742 [2024-12-05 03:16:01.406926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:30.742 [2024-12-05 03:16:01.406938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.949 ms 00:31:30.742 [2024-12-05 03:16:01.406944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.742 [2024-12-05 03:16:01.413991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.742 [2024-12-05 03:16:01.414120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:30.742 [2024-12-05 03:16:01.414132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.991 ms 00:31:30.742 [2024-12-05 03:16:01.414138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.742 [2024-12-05 03:16:01.414165] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:30.742 [2024-12-05 03:16:01.414178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:30.742 [2024-12-05 03:16:01.414186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:30.742 [2024-12-05 03:16:01.414192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:30.742 [2024-12-05 03:16:01.414198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:30.742 [2024-12-05 03:16:01.414204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:30.742 [2024-12-05 03:16:01.414210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:30.742 [2024-12-05 03:16:01.414216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:30.742 [2024-12-05 03:16:01.414222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:30.742 
[2024-12-05 03:16:01.414228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:30.742 [2024-12-05 03:16:01.414234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:30.742 [2024-12-05 03:16:01.414239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:30.742 [2024-12-05 03:16:01.414245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:30.742 [2024-12-05 03:16:01.414251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:30.742 [2024-12-05 03:16:01.414257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:30.742 [2024-12-05 03:16:01.414262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:30.742 [2024-12-05 03:16:01.414268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:30.742 [2024-12-05 03:16:01.414273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:30.742 [2024-12-05 03:16:01.414279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:30.742 [2024-12-05 03:16:01.414286] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:30.742 [2024-12-05 03:16:01.414292] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 622a0b7e-690c-48b4-a5e7-caff1c5488be 00:31:30.742 [2024-12-05 03:16:01.414298] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:30.742 [2024-12-05 03:16:01.414304] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:31:30.742 [2024-12-05 03:16:01.414309] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:31:30.742 [2024-12-05 03:16:01.414315] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:31:30.742 [2024-12-05 03:16:01.414320] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:30.742 [2024-12-05 03:16:01.414326] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:30.742 [2024-12-05 03:16:01.414336] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:30.742 [2024-12-05 03:16:01.414341] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:30.742 [2024-12-05 03:16:01.414346] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:30.742 [2024-12-05 03:16:01.414351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.742 [2024-12-05 03:16:01.414359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:30.742 [2024-12-05 03:16:01.414366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.187 ms 00:31:30.742 [2024-12-05 03:16:01.414372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.742 [2024-12-05 03:16:01.424155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.742 [2024-12-05 03:16:01.424180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:30.742 [2024-12-05 03:16:01.424188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.768 ms 00:31:30.742 [2024-12-05 03:16:01.424195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:31:30.742 [2024-12-05 03:16:01.424488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.742 [2024-12-05 03:16:01.424501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:30.742 [2024-12-05 03:16:01.424508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.274 ms 00:31:30.742 [2024-12-05 03:16:01.424514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.742 [2024-12-05 03:16:01.459232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:30.742 [2024-12-05 03:16:01.459260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:30.742 [2024-12-05 03:16:01.459269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:30.742 [2024-12-05 03:16:01.459275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.742 [2024-12-05 03:16:01.459305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:30.742 [2024-12-05 03:16:01.459312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:30.742 [2024-12-05 03:16:01.459318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:30.742 [2024-12-05 03:16:01.459324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.742 [2024-12-05 03:16:01.459394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:30.742 [2024-12-05 03:16:01.459402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:30.742 [2024-12-05 03:16:01.459410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:30.742 [2024-12-05 03:16:01.459416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.742 [2024-12-05 03:16:01.459433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:30.742 [2024-12-05 03:16:01.459440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:30.742 [2024-12-05 03:16:01.459447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:30.742 [2024-12-05 03:16:01.459453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.742 [2024-12-05 03:16:01.522433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:30.742 [2024-12-05 03:16:01.522470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:30.742 [2024-12-05 03:16:01.522480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:30.742 [2024-12-05 03:16:01.522486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.742 [2024-12-05 03:16:01.573429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:30.743 [2024-12-05 03:16:01.573586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:30.743 [2024-12-05 03:16:01.573601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:30.743 [2024-12-05 03:16:01.573608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.743 [2024-12-05 03:16:01.573680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:30.743 [2024-12-05 03:16:01.573689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:30.743 [2024-12-05 03:16:01.573696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:30.743 [2024-12-05 03:16:01.573703] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.743 [2024-12-05 03:16:01.573753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:30.743 [2024-12-05 03:16:01.573772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:30.743 [2024-12-05 03:16:01.573780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:30.743 [2024-12-05 03:16:01.573786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.743 [2024-12-05 03:16:01.573868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:30.743 [2024-12-05 03:16:01.573876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:30.743 [2024-12-05 03:16:01.573884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:30.743 [2024-12-05 03:16:01.573890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.743 [2024-12-05 03:16:01.573917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:30.743 [2024-12-05 03:16:01.573925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:30.743 [2024-12-05 03:16:01.573934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:30.743 [2024-12-05 03:16:01.573940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.743 [2024-12-05 03:16:01.573975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:30.743 [2024-12-05 03:16:01.573982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:30.743 [2024-12-05 03:16:01.573989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:30.743 [2024-12-05 03:16:01.573995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.743 [2024-12-05 03:16:01.574033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:30.743 [2024-12-05 03:16:01.574043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:30.743 [2024-12-05 03:16:01.574050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:30.743 [2024-12-05 03:16:01.574056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.743 [2024-12-05 03:16:01.574183] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 205.753 ms, result 0 00:31:31.688 03:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:31.688 03:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:31.688 03:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:31:31.688 03:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:31:31.688 03:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:31:31.688 03:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:31.688 Remove shared memory files 00:31:31.688 03:16:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:31:31.688 03:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:31.688 03:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:31:31.688 03:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:31:31.688 03:16:02 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid83644 00:31:31.688 03:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:31.688 03:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:31.688 ************************************ 00:31:31.688 END TEST ftl_upgrade_shutdown 00:31:31.688 ************************************ 00:31:31.688 00:31:31.688 real 1m20.562s 00:31:31.688 user 1m51.807s 00:31:31.688 sys 0m18.324s 00:31:31.688 03:16:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:31.688 03:16:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:31.688 03:16:02 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:31:31.688 03:16:02 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:31:31.688 03:16:02 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:31:31.688 03:16:02 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:31.688 03:16:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:31.688 ************************************ 00:31:31.688 START TEST ftl_restore_fast 00:31:31.688 ************************************ 00:31:31.688 03:16:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:31:31.688 * Looking for test storage... 00:31:31.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lcov --version 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:31:31.949 03:16:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:31.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.950 --rc genhtml_branch_coverage=1 00:31:31.950 --rc genhtml_function_coverage=1 00:31:31.950 --rc genhtml_legend=1 00:31:31.950 --rc geninfo_all_blocks=1 00:31:31.950 --rc geninfo_unexecuted_blocks=1 00:31:31.950 00:31:31.950 ' 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:31.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.950 --rc genhtml_branch_coverage=1 00:31:31.950 --rc genhtml_function_coverage=1 00:31:31.950 --rc genhtml_legend=1 00:31:31.950 --rc geninfo_all_blocks=1 00:31:31.950 --rc geninfo_unexecuted_blocks=1 00:31:31.950 00:31:31.950 ' 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:31.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.950 --rc genhtml_branch_coverage=1 00:31:31.950 --rc genhtml_function_coverage=1 00:31:31.950 --rc genhtml_legend=1 00:31:31.950 --rc geninfo_all_blocks=1 00:31:31.950 --rc geninfo_unexecuted_blocks=1 00:31:31.950 00:31:31.950 ' 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:31.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.950 --rc genhtml_branch_coverage=1 00:31:31.950 --rc genhtml_function_coverage=1 00:31:31.950 --rc genhtml_legend=1 00:31:31.950 --rc geninfo_all_blocks=1 00:31:31.950 --rc geninfo_unexecuted_blocks=1 00:31:31.950 00:31:31.950 ' 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.Hp1DKenCcO 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:31:31.950 03:16:02 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=84127 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 84127 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 84127 ']' 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:31.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:31.950 03:16:02 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:31:31.950 [2024-12-05 03:16:02.698508] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:31:31.950 [2024-12-05 03:16:02.698617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84127 ] 00:31:32.210 [2024-12-05 03:16:02.855034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.210 [2024-12-05 03:16:02.931779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:32.791 03:16:03 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:32.791 03:16:03 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:31:32.791 03:16:03 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:31:32.791 03:16:03 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:31:32.791 03:16:03 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:32.791 03:16:03 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:31:32.791 03:16:03 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:31:32.791 03:16:03 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:33.052 03:16:03 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:31:33.052 03:16:03 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:31:33.052 03:16:03 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:31:33.052 03:16:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:31:33.052 03:16:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:33.052 03:16:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:33.052 03:16:03 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:31:33.052 03:16:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:31:33.313 03:16:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:33.313 { 00:31:33.313 "name": "nvme0n1", 00:31:33.313 "aliases": [ 00:31:33.313 "3b4a5ffa-fa5a-42a3-b945-b6640df11181" 00:31:33.313 ], 00:31:33.313 "product_name": "NVMe disk", 00:31:33.313 "block_size": 4096, 00:31:33.313 "num_blocks": 1310720, 00:31:33.313 "uuid": "3b4a5ffa-fa5a-42a3-b945-b6640df11181", 00:31:33.313 "numa_id": -1, 00:31:33.313 "assigned_rate_limits": { 00:31:33.313 "rw_ios_per_sec": 0, 00:31:33.313 "rw_mbytes_per_sec": 0, 00:31:33.313 "r_mbytes_per_sec": 0, 00:31:33.313 "w_mbytes_per_sec": 0 00:31:33.313 }, 00:31:33.313 "claimed": true, 00:31:33.313 "claim_type": "read_many_write_one", 00:31:33.313 "zoned": false, 00:31:33.313 "supported_io_types": { 00:31:33.313 "read": true, 00:31:33.313 "write": true, 00:31:33.313 "unmap": true, 00:31:33.313 "flush": true, 00:31:33.313 "reset": true, 00:31:33.313 "nvme_admin": true, 00:31:33.313 "nvme_io": true, 00:31:33.313 "nvme_io_md": false, 00:31:33.313 "write_zeroes": true, 00:31:33.313 "zcopy": false, 00:31:33.313 "get_zone_info": false, 00:31:33.313 "zone_management": false, 00:31:33.313 "zone_append": false, 00:31:33.313 "compare": true, 00:31:33.313 "compare_and_write": false, 00:31:33.313 "abort": true, 00:31:33.313 "seek_hole": false, 00:31:33.313 "seek_data": false, 00:31:33.313 "copy": true, 00:31:33.313 "nvme_iov_md": false 00:31:33.314 }, 00:31:33.314 "driver_specific": { 00:31:33.314 "nvme": [ 00:31:33.314 { 00:31:33.314 "pci_address": "0000:00:11.0", 00:31:33.314 "trid": { 00:31:33.314 "trtype": "PCIe", 00:31:33.314 "traddr": "0000:00:11.0" 00:31:33.314 }, 00:31:33.314 "ctrlr_data": { 00:31:33.314 "cntlid": 0, 00:31:33.314 "vendor_id": "0x1b36", 00:31:33.314 "model_number": "QEMU NVMe Ctrl", 00:31:33.314 "serial_number": "12341", 00:31:33.314 "firmware_revision": "8.0.0", 00:31:33.314 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:33.314 "oacs": { 00:31:33.314 "security": 0, 00:31:33.314 "format": 1, 00:31:33.314 "firmware": 0, 00:31:33.314 "ns_manage": 1 00:31:33.314 }, 00:31:33.314 "multi_ctrlr": false, 00:31:33.314 "ana_reporting": false 00:31:33.314 }, 00:31:33.314 "vs": { 00:31:33.314 "nvme_version": "1.4" 00:31:33.314 }, 00:31:33.314 "ns_data": { 00:31:33.314 "id": 1, 00:31:33.314 "can_share": false 00:31:33.314 } 00:31:33.314 } 00:31:33.314 ], 00:31:33.314 "mp_policy": "active_passive" 00:31:33.314 } 00:31:33.314 } 00:31:33.314 ]' 00:31:33.314 03:16:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:33.314 03:16:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:33.314 03:16:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:33.314 03:16:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:31:33.314 03:16:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:31:33.314 03:16:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:31:33.314 03:16:04 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:31:33.314 03:16:04 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:31:33.314 03:16:04 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:31:33.314 03:16:04 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:31:33.314 03:16:04 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:33.575 03:16:04 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=99de7fa5-6cb9-443c-aa83-1dda50c46886 00:31:33.575 03:16:04 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:31:33.575 03:16:04 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 99de7fa5-6cb9-443c-aa83-1dda50c46886 00:31:33.836 03:16:04 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:31:33.836 03:16:04 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=a25c2f25-be74-40cb-871a-7a6810986950 00:31:33.836 03:16:04 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a25c2f25-be74-40cb-871a-7a6810986950 00:31:34.097 03:16:04 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=eb04d350-eb95-494f-9a68-d51503f674cd 00:31:34.097 03:16:04 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:31:34.097 03:16:04 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 eb04d350-eb95-494f-9a68-d51503f674cd 00:31:34.097 03:16:04 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:31:34.097 03:16:04 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:34.097 03:16:04 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=eb04d350-eb95-494f-9a68-d51503f674cd 00:31:34.097 03:16:04 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:31:34.097 03:16:04 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size eb04d350-eb95-494f-9a68-d51503f674cd 00:31:34.097 03:16:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=eb04d350-eb95-494f-9a68-d51503f674cd 00:31:34.097 03:16:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:34.097 03:16:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:34.097 03:16:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:34.097 03:16:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b eb04d350-eb95-494f-9a68-d51503f674cd 00:31:34.359 03:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:34.359 { 00:31:34.360 "name": "eb04d350-eb95-494f-9a68-d51503f674cd", 00:31:34.360 "aliases": [ 00:31:34.360 "lvs/nvme0n1p0" 00:31:34.360 ], 00:31:34.360 "product_name": "Logical Volume", 00:31:34.360 "block_size": 4096, 00:31:34.360 "num_blocks": 26476544, 00:31:34.360 "uuid": "eb04d350-eb95-494f-9a68-d51503f674cd", 00:31:34.360 "assigned_rate_limits": { 00:31:34.360 "rw_ios_per_sec": 0, 00:31:34.360 "rw_mbytes_per_sec": 0, 00:31:34.360 "r_mbytes_per_sec": 0, 00:31:34.360 "w_mbytes_per_sec": 0 00:31:34.360 }, 00:31:34.360 "claimed": false, 00:31:34.360 "zoned": false, 00:31:34.360 "supported_io_types": { 00:31:34.360 "read": true, 00:31:34.360 "write": true, 00:31:34.360 "unmap": true, 00:31:34.360 "flush": false, 00:31:34.360 "reset": true, 00:31:34.360 "nvme_admin": false, 00:31:34.360 "nvme_io": false, 00:31:34.360 "nvme_io_md": false, 00:31:34.360 "write_zeroes": true, 00:31:34.360 "zcopy": false, 00:31:34.360 "get_zone_info": false, 00:31:34.360 "zone_management": false, 00:31:34.360 "zone_append": 
false, 00:31:34.360 "compare": false, 00:31:34.360 "compare_and_write": false, 00:31:34.360 "abort": false, 00:31:34.360 "seek_hole": true, 00:31:34.360 "seek_data": true, 00:31:34.360 "copy": false, 00:31:34.360 "nvme_iov_md": false 00:31:34.360 }, 00:31:34.360 "driver_specific": { 00:31:34.360 "lvol": { 00:31:34.361 "lvol_store_uuid": "a25c2f25-be74-40cb-871a-7a6810986950", 00:31:34.361 "base_bdev": "nvme0n1", 00:31:34.361 "thin_provision": true, 00:31:34.361 "num_allocated_clusters": 0, 00:31:34.361 "snapshot": false, 00:31:34.361 "clone": false, 00:31:34.361 "esnap_clone": false 00:31:34.361 } 00:31:34.361 } 00:31:34.361 } 00:31:34.361 ]' 00:31:34.361 03:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:34.361 03:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:34.361 03:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:34.361 03:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:34.361 03:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:34.361 03:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:34.361 03:16:05 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:31:34.361 03:16:05 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:31:34.361 03:16:05 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:31:34.631 03:16:05 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:31:34.631 03:16:05 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:31:34.631 03:16:05 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size eb04d350-eb95-494f-9a68-d51503f674cd 00:31:34.631 03:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=eb04d350-eb95-494f-9a68-d51503f674cd 00:31:34.631 03:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:34.631 03:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:34.631 03:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:34.631 03:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b eb04d350-eb95-494f-9a68-d51503f674cd 00:31:34.890 03:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:34.890 { 00:31:34.890 "name": "eb04d350-eb95-494f-9a68-d51503f674cd", 00:31:34.890 "aliases": [ 00:31:34.890 "lvs/nvme0n1p0" 00:31:34.890 ], 00:31:34.890 "product_name": "Logical Volume", 00:31:34.890 "block_size": 4096, 00:31:34.890 "num_blocks": 26476544, 00:31:34.890 "uuid": "eb04d350-eb95-494f-9a68-d51503f674cd", 00:31:34.890 "assigned_rate_limits": { 00:31:34.890 "rw_ios_per_sec": 0, 00:31:34.890 "rw_mbytes_per_sec": 0, 00:31:34.890 "r_mbytes_per_sec": 0, 00:31:34.890 "w_mbytes_per_sec": 0 00:31:34.890 }, 00:31:34.890 "claimed": false, 00:31:34.890 "zoned": false, 00:31:34.890 "supported_io_types": { 00:31:34.890 "read": true, 00:31:34.890 "write": true, 00:31:34.890 "unmap": true, 00:31:34.890 "flush": false, 00:31:34.890 "reset": true, 00:31:34.890 "nvme_admin": false, 00:31:34.890 "nvme_io": false, 00:31:34.890 "nvme_io_md": false, 00:31:34.890 "write_zeroes": true, 00:31:34.890 "zcopy": false, 00:31:34.890 "get_zone_info": false, 00:31:34.890 "zone_management": false, 
00:31:34.890 "zone_append": false, 00:31:34.890 "compare": false, 00:31:34.890 "compare_and_write": false, 00:31:34.890 "abort": false, 00:31:34.890 "seek_hole": true, 00:31:34.890 "seek_data": true, 00:31:34.890 "copy": false, 00:31:34.890 "nvme_iov_md": false 00:31:34.890 }, 00:31:34.890 "driver_specific": { 00:31:34.890 "lvol": { 00:31:34.890 "lvol_store_uuid": "a25c2f25-be74-40cb-871a-7a6810986950", 00:31:34.890 "base_bdev": "nvme0n1", 00:31:34.890 "thin_provision": true, 00:31:34.890 "num_allocated_clusters": 0, 00:31:34.890 "snapshot": false, 00:31:34.890 "clone": false, 00:31:34.890 "esnap_clone": false 00:31:34.890 } 00:31:34.890 } 00:31:34.890 } 00:31:34.890 ]' 00:31:34.890 03:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:34.890 03:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:34.890 03:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:34.890 03:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:34.890 03:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:34.890 03:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:34.890 03:16:05 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:31:34.890 03:16:05 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:31:35.151 03:16:05 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:31:35.151 03:16:05 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size eb04d350-eb95-494f-9a68-d51503f674cd 00:31:35.151 03:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=eb04d350-eb95-494f-9a68-d51503f674cd 00:31:35.151 03:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:35.151 03:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:35.151 03:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:35.151 03:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b eb04d350-eb95-494f-9a68-d51503f674cd 00:31:35.151 03:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:35.151 { 00:31:35.151 "name": "eb04d350-eb95-494f-9a68-d51503f674cd", 00:31:35.151 "aliases": [ 00:31:35.151 "lvs/nvme0n1p0" 00:31:35.151 ], 00:31:35.151 "product_name": "Logical Volume", 00:31:35.151 "block_size": 4096, 00:31:35.151 "num_blocks": 26476544, 00:31:35.151 "uuid": "eb04d350-eb95-494f-9a68-d51503f674cd", 00:31:35.151 "assigned_rate_limits": { 00:31:35.151 "rw_ios_per_sec": 0, 00:31:35.151 "rw_mbytes_per_sec": 0, 00:31:35.151 "r_mbytes_per_sec": 0, 00:31:35.151 "w_mbytes_per_sec": 0 00:31:35.151 }, 00:31:35.151 "claimed": false, 00:31:35.151 "zoned": false, 00:31:35.151 "supported_io_types": { 00:31:35.151 "read": true, 00:31:35.151 "write": true, 00:31:35.151 "unmap": true, 00:31:35.151 "flush": false, 00:31:35.151 "reset": true, 00:31:35.151 "nvme_admin": false, 00:31:35.151 "nvme_io": false, 00:31:35.151 "nvme_io_md": false, 00:31:35.151 "write_zeroes": true, 00:31:35.151 "zcopy": false, 00:31:35.151 "get_zone_info": false, 00:31:35.151 "zone_management": false, 00:31:35.151 "zone_append": false, 00:31:35.151 "compare": false, 00:31:35.151 "compare_and_write": false, 00:31:35.151 "abort": false, 00:31:35.151 "seek_hole": 
true, 00:31:35.151 "seek_data": true, 00:31:35.151 "copy": false, 00:31:35.151 "nvme_iov_md": false 00:31:35.151 }, 00:31:35.151 "driver_specific": { 00:31:35.151 "lvol": { 00:31:35.151 "lvol_store_uuid": "a25c2f25-be74-40cb-871a-7a6810986950", 00:31:35.151 "base_bdev": "nvme0n1", 00:31:35.151 "thin_provision": true, 00:31:35.151 "num_allocated_clusters": 0, 00:31:35.151 "snapshot": false, 00:31:35.151 "clone": false, 00:31:35.151 "esnap_clone": false 00:31:35.151 } 00:31:35.151 } 00:31:35.151 } 00:31:35.151 ]' 00:31:35.151 03:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:35.151 03:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:35.151 03:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:35.413 03:16:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:35.413 03:16:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:35.413 03:16:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:35.413 03:16:06 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:31:35.413 03:16:06 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d eb04d350-eb95-494f-9a68-d51503f674cd --l2p_dram_limit 10' 00:31:35.413 03:16:06 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:31:35.413 03:16:06 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:31:35.413 03:16:06 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:31:35.413 03:16:06 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:31:35.413 03:16:06 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:31:35.413 03:16:06 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d eb04d350-eb95-494f-9a68-d51503f674cd --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:31:35.413 [2024-12-05 03:16:06.191359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.413 [2024-12-05 03:16:06.191475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:35.413 [2024-12-05 03:16:06.191494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:35.413 [2024-12-05 03:16:06.191501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.413 [2024-12-05 03:16:06.191553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.413 [2024-12-05 03:16:06.191561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:35.413 [2024-12-05 03:16:06.191569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:31:35.413 [2024-12-05 03:16:06.191575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.413 [2024-12-05 03:16:06.191594] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:35.413 [2024-12-05 03:16:06.192135] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:35.413 [2024-12-05 03:16:06.192152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.413 [2024-12-05 03:16:06.192159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:35.413 [2024-12-05 03:16:06.192168] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:31:35.413 [2024-12-05 03:16:06.192175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.413 [2024-12-05 03:16:06.192199] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 71dee4a2-a84c-41e2-9929-a20823ca6df5 00:31:35.413 [2024-12-05 03:16:06.193189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.413 [2024-12-05 03:16:06.193217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:31:35.413 [2024-12-05 03:16:06.193225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:31:35.413 [2024-12-05 03:16:06.193234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.413 [2024-12-05 03:16:06.198088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.413 [2024-12-05 03:16:06.198197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:35.413 [2024-12-05 03:16:06.198209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.820 ms 00:31:35.413 [2024-12-05 03:16:06.198216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.413 [2024-12-05 03:16:06.198311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.413 [2024-12-05 03:16:06.198321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:35.413 [2024-12-05 03:16:06.198329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:31:35.413 [2024-12-05 03:16:06.198339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.413 [2024-12-05 03:16:06.198377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.413 [2024-12-05 03:16:06.198386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:35.413 [2024-12-05 03:16:06.198395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:35.413 [2024-12-05 03:16:06.198402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.413 [2024-12-05 03:16:06.198419] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:35.413 [2024-12-05 03:16:06.201323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.413 [2024-12-05 03:16:06.201417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:35.413 [2024-12-05 03:16:06.201433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.907 ms 00:31:35.413 [2024-12-05 03:16:06.201439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.413 [2024-12-05 03:16:06.201486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.413 [2024-12-05 03:16:06.201493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:35.413 [2024-12-05 03:16:06.201501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:35.413 [2024-12-05 03:16:06.201507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.413 [2024-12-05 03:16:06.201520] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:31:35.413 [2024-12-05 03:16:06.201630] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:35.413 [2024-12-05 03:16:06.201643] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:35.413 [2024-12-05 03:16:06.201651] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:35.413 [2024-12-05 03:16:06.201660] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:35.413 [2024-12-05 03:16:06.201668] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:35.413 [2024-12-05 03:16:06.201675] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:35.413 [2024-12-05 03:16:06.201681] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:35.413 [2024-12-05 03:16:06.201691] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:35.414 [2024-12-05 03:16:06.201697] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:35.414 [2024-12-05 03:16:06.201706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.414 [2024-12-05 03:16:06.201716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:35.414 [2024-12-05 03:16:06.201723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:31:35.414 [2024-12-05 03:16:06.201729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.414 [2024-12-05 03:16:06.201795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.414 [2024-12-05 03:16:06.201803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:35.414 [2024-12-05 03:16:06.201810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:31:35.414 [2024-12-05 03:16:06.201816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.414 [2024-12-05 03:16:06.201896] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:35.414 [2024-12-05 03:16:06.201904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:35.414 [2024-12-05 03:16:06.201911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:35.414 [2024-12-05 03:16:06.201918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:35.414 [2024-12-05 03:16:06.201926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:35.414 [2024-12-05 03:16:06.201931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:35.414 [2024-12-05 03:16:06.201937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:35.414 [2024-12-05 03:16:06.201942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:35.414 [2024-12-05 03:16:06.201949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:35.414 [2024-12-05 03:16:06.201954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:35.414 [2024-12-05 03:16:06.201962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:35.414 [2024-12-05 03:16:06.201967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:35.414 [2024-12-05 03:16:06.201975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:35.414 [2024-12-05 03:16:06.201981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:35.414 [2024-12-05 03:16:06.201988] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:31:35.414 [2024-12-05 03:16:06.201993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:35.414 [2024-12-05 03:16:06.202002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:35.414 [2024-12-05 03:16:06.202007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:35.414 [2024-12-05 03:16:06.202014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:35.414 [2024-12-05 03:16:06.202019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:35.414 [2024-12-05 03:16:06.202026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:35.414 [2024-12-05 03:16:06.202031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:35.414 [2024-12-05 03:16:06.202037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:35.414 [2024-12-05 03:16:06.202043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:35.414 [2024-12-05 03:16:06.202050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:35.414 [2024-12-05 03:16:06.202055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:35.414 [2024-12-05 03:16:06.202062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:35.414 [2024-12-05 03:16:06.202068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:35.414 [2024-12-05 03:16:06.202092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:35.414 [2024-12-05 03:16:06.202097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:35.414 [2024-12-05 03:16:06.202108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:35.414 [2024-12-05 03:16:06.202114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:35.414 [2024-12-05 03:16:06.202122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:35.414 [2024-12-05 03:16:06.202127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:35.414 [2024-12-05 03:16:06.202134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:35.414 [2024-12-05 03:16:06.202139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:35.414 [2024-12-05 03:16:06.202146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:35.414 [2024-12-05 03:16:06.202151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:35.414 [2024-12-05 03:16:06.202158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:35.414 [2024-12-05 03:16:06.202164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:35.414 [2024-12-05 03:16:06.202171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:35.414 [2024-12-05 03:16:06.202176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:35.414 [2024-12-05 03:16:06.202183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:35.414 [2024-12-05 03:16:06.202188] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:35.414 [2024-12-05 03:16:06.202195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:35.414 [2024-12-05 03:16:06.202200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:35.414 [2024-12-05 
03:16:06.202208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:35.414 [2024-12-05 03:16:06.202214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:35.414 [2024-12-05 03:16:06.202222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:35.414 [2024-12-05 03:16:06.202228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:35.414 [2024-12-05 03:16:06.202236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:35.414 [2024-12-05 03:16:06.202241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:35.414 [2024-12-05 03:16:06.202247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:35.414 [2024-12-05 03:16:06.202254] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:35.414 [2024-12-05 03:16:06.202265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:35.414 [2024-12-05 03:16:06.202278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:35.414 [2024-12-05 03:16:06.202285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:35.414 [2024-12-05 03:16:06.202291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:35.414 [2024-12-05 03:16:06.202297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:35.414 [2024-12-05 03:16:06.202303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:35.414 [2024-12-05 03:16:06.202309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:35.414 [2024-12-05 03:16:06.202315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:35.414 [2024-12-05 03:16:06.202324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:35.414 [2024-12-05 03:16:06.202331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:35.414 [2024-12-05 03:16:06.202339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:35.414 [2024-12-05 03:16:06.202345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:35.414 [2024-12-05 03:16:06.202352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:35.414 [2024-12-05 03:16:06.202358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:35.414 [2024-12-05 03:16:06.202364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:35.414 [2024-12-05 
03:16:06.202370] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:35.414 [2024-12-05 03:16:06.202380] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:35.414 [2024-12-05 03:16:06.202386] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:35.414 [2024-12-05 03:16:06.202393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:35.414 [2024-12-05 03:16:06.202399] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:35.414 [2024-12-05 03:16:06.202406] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:35.414 [2024-12-05 03:16:06.202412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.414 [2024-12-05 03:16:06.202420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:35.414 [2024-12-05 03:16:06.202425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:31:35.414 [2024-12-05 03:16:06.202432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.414 [2024-12-05 03:16:06.202471] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:31:35.414 [2024-12-05 03:16:06.202482] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:31:39.625 [2024-12-05 03:16:10.281977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.625 [2024-12-05 03:16:10.282064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:31:39.625 [2024-12-05 03:16:10.282105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4079.488 ms 00:31:39.625 [2024-12-05 03:16:10.282119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.625 [2024-12-05 03:16:10.313423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.625 [2024-12-05 03:16:10.313502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:39.625 [2024-12-05 03:16:10.313517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.058 ms 00:31:39.625 [2024-12-05 03:16:10.313528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.625 [2024-12-05 03:16:10.313669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.625 [2024-12-05 03:16:10.313685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:39.625 [2024-12-05 03:16:10.313695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:31:39.625 [2024-12-05 03:16:10.313712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.625 [2024-12-05 03:16:10.349045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.625 [2024-12-05 03:16:10.349121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:39.625 [2024-12-05 03:16:10.349135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.297 ms 00:31:39.625 [2024-12-05 03:16:10.349146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:31:39.625 [2024-12-05 03:16:10.349181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.625 [2024-12-05 03:16:10.349197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:39.625 [2024-12-05 03:16:10.349206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:39.625 [2024-12-05 03:16:10.349225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.625 [2024-12-05 03:16:10.349827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.625 [2024-12-05 03:16:10.349858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:39.625 [2024-12-05 03:16:10.349869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:31:39.625 [2024-12-05 03:16:10.349880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.625 [2024-12-05 03:16:10.349996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.625 [2024-12-05 03:16:10.350018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:39.625 [2024-12-05 03:16:10.350031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:31:39.625 [2024-12-05 03:16:10.350043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.625 [2024-12-05 03:16:10.367257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.625 [2024-12-05 03:16:10.367500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:39.625 [2024-12-05 03:16:10.367519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.193 ms 00:31:39.625 [2024-12-05 03:16:10.367531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.625 [2024-12-05 03:16:10.394000] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:39.625 [2024-12-05 03:16:10.397815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.625 [2024-12-05 03:16:10.397862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:39.625 [2024-12-05 03:16:10.397879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.190 ms 00:31:39.625 [2024-12-05 03:16:10.397888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.887 [2024-12-05 03:16:10.514509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.887 [2024-12-05 03:16:10.514565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:31:39.887 [2024-12-05 03:16:10.514584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 116.569 ms 00:31:39.887 [2024-12-05 03:16:10.514594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.887 [2024-12-05 03:16:10.514805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.887 [2024-12-05 03:16:10.514823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:39.887 [2024-12-05 03:16:10.514837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:31:39.887 [2024-12-05 03:16:10.514847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.887 [2024-12-05 03:16:10.540974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.887 [2024-12-05 03:16:10.541227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 
00:31:39.887 [2024-12-05 03:16:10.541258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.070 ms 00:31:39.887 [2024-12-05 03:16:10.541268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.887 [2024-12-05 03:16:10.565836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.887 [2024-12-05 03:16:10.565882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:31:39.887 [2024-12-05 03:16:10.565898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.514 ms 00:31:39.887 [2024-12-05 03:16:10.565907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.887 [2024-12-05 03:16:10.566557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.887 [2024-12-05 03:16:10.566582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:39.887 [2024-12-05 03:16:10.566595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.601 ms 00:31:39.887 [2024-12-05 03:16:10.566606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.887 [2024-12-05 03:16:10.660026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.888 [2024-12-05 03:16:10.660097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:31:39.888 [2024-12-05 03:16:10.660119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.374 ms 00:31:39.888 [2024-12-05 03:16:10.660128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.888 [2024-12-05 03:16:10.687384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.888 [2024-12-05 03:16:10.688057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:31:39.888 [2024-12-05 03:16:10.688183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.155 ms 00:31:39.888 [2024-12-05 03:16:10.688214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.888 [2024-12-05 03:16:10.723030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.888 [2024-12-05 03:16:10.723236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:31:39.888 [2024-12-05 03:16:10.723266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.670 ms 00:31:39.888 [2024-12-05 03:16:10.723274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.149 [2024-12-05 03:16:10.749913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.149 [2024-12-05 03:16:10.749965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:40.149 [2024-12-05 03:16:10.749981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.587 ms 00:31:40.149 [2024-12-05 03:16:10.749989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.149 [2024-12-05 03:16:10.750286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.149 [2024-12-05 03:16:10.750333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:40.149 [2024-12-05 03:16:10.750363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:40.149 [2024-12-05 03:16:10.750384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.149 [2024-12-05 03:16:10.750541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.149 [2024-12-05 03:16:10.750559] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:40.149 [2024-12-05 03:16:10.750571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:31:40.149 [2024-12-05 03:16:10.750579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.149 [2024-12-05 03:16:10.751794] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4559.916 ms, result 0 00:31:40.149 { 00:31:40.149 "name": "ftl0", 00:31:40.149 "uuid": "71dee4a2-a84c-41e2-9929-a20823ca6df5" 00:31:40.149 } 00:31:40.149 03:16:10 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:31:40.149 03:16:10 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:31:40.410 03:16:10 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:31:40.410 03:16:10 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:31:40.410 [2024-12-05 03:16:11.186970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.410 [2024-12-05 03:16:11.187211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:40.410 [2024-12-05 03:16:11.187670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:40.410 [2024-12-05 03:16:11.187708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.410 [2024-12-05 03:16:11.187765] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:40.410 [2024-12-05 03:16:11.191017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.410 [2024-12-05 03:16:11.191150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:40.410 [2024-12-05 03:16:11.191225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.223 ms 00:31:40.410 [2024-12-05 03:16:11.191251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.410 [2024-12-05 03:16:11.191589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.410 [2024-12-05 03:16:11.191629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:40.410 [2024-12-05 03:16:11.191653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:31:40.410 [2024-12-05 03:16:11.191675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.410 [2024-12-05 03:16:11.195029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.410 [2024-12-05 03:16:11.195149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:40.410 [2024-12-05 03:16:11.195215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.242 ms 00:31:40.410 [2024-12-05 03:16:11.195240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.410 [2024-12-05 03:16:11.201399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.411 [2024-12-05 03:16:11.201555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:40.411 [2024-12-05 03:16:11.201632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.116 ms 00:31:40.411 [2024-12-05 03:16:11.201655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.411 [2024-12-05 03:16:11.227763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.411 
[2024-12-05 03:16:11.227941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:40.411 [2024-12-05 03:16:11.228022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.013 ms 00:31:40.411 [2024-12-05 03:16:11.228046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.411 [2024-12-05 03:16:11.245802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.411 [2024-12-05 03:16:11.245972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:40.411 [2024-12-05 03:16:11.246039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.681 ms 00:31:40.411 [2024-12-05 03:16:11.246063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.411 [2024-12-05 03:16:11.246257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.411 [2024-12-05 03:16:11.246289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:40.411 [2024-12-05 03:16:11.246314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:31:40.411 [2024-12-05 03:16:11.246335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.673 [2024-12-05 03:16:11.272178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.673 [2024-12-05 03:16:11.272342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:40.673 [2024-12-05 03:16:11.272406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.805 ms 00:31:40.673 [2024-12-05 03:16:11.272429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.673 [2024-12-05 03:16:11.298438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.673 [2024-12-05 03:16:11.298612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:40.673 [2024-12-05 03:16:11.298679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.899 ms 00:31:40.673 [2024-12-05 03:16:11.298703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.673 [2024-12-05 03:16:11.323596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.673 [2024-12-05 03:16:11.323762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:40.673 [2024-12-05 03:16:11.323827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.739 ms 00:31:40.673 [2024-12-05 03:16:11.323851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.673 [2024-12-05 03:16:11.356909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.673 [2024-12-05 03:16:11.357106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:40.673 [2024-12-05 03:16:11.357181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.492 ms 00:31:40.673 [2024-12-05 03:16:11.357206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.673 [2024-12-05 03:16:11.357499] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:40.673 [2024-12-05 03:16:11.357591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:40.673 [2024-12-05 03:16:11.358000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:40.673 [2024-12-05 03:16:11.358025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:40.673 [2024-12-05 03:16:11.358036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:40.673 [2024-12-05 03:16:11.358045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:40.673 [2024-12-05 03:16:11.358055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:40.673 [2024-12-05 03:16:11.358063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358289] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 
03:16:11.358515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:31:40.674 [2024-12-05 03:16:11.358751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:40.674 [2024-12-05 03:16:11.358917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:40.675 [2024-12-05 03:16:11.358930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:40.675 [2024-12-05 03:16:11.358939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:40.675 [2024-12-05 03:16:11.358949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:40.675 [2024-12-05 03:16:11.358965] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:40.675 [2024-12-05 03:16:11.358975] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 71dee4a2-a84c-41e2-9929-a20823ca6df5 00:31:40.675 
[2024-12-05 03:16:11.358983] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:40.675 [2024-12-05 03:16:11.358996] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:40.675 [2024-12-05 03:16:11.359006] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:40.675 [2024-12-05 03:16:11.359017] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:40.675 [2024-12-05 03:16:11.359024] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:40.675 [2024-12-05 03:16:11.359034] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:40.675 [2024-12-05 03:16:11.359042] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:40.675 [2024-12-05 03:16:11.359050] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:40.675 [2024-12-05 03:16:11.359057] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:40.675 [2024-12-05 03:16:11.359279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.675 [2024-12-05 03:16:11.359314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:40.675 [2024-12-05 03:16:11.359341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.586 ms 00:31:40.675 [2024-12-05 03:16:11.359364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.675 [2024-12-05 03:16:11.372994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.675 [2024-12-05 03:16:11.373163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:40.675 [2024-12-05 03:16:11.373230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.535 ms 00:31:40.675 [2024-12-05 03:16:11.373253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.675 [2024-12-05 03:16:11.373693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.675 [2024-12-05 03:16:11.373848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:40.675 [2024-12-05 03:16:11.373884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:31:40.675 [2024-12-05 03:16:11.373904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.675 [2024-12-05 03:16:11.420256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.675 [2024-12-05 03:16:11.420419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:40.675 [2024-12-05 03:16:11.420482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.675 [2024-12-05 03:16:11.420506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.675 [2024-12-05 03:16:11.420591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.675 [2024-12-05 03:16:11.420616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:40.675 [2024-12-05 03:16:11.420642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.675 [2024-12-05 03:16:11.420662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.675 [2024-12-05 03:16:11.420760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.675 [2024-12-05 03:16:11.420787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:40.675 [2024-12-05 03:16:11.420813] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.675 [2024-12-05 03:16:11.421401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.675 [2024-12-05 03:16:11.421628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.675 [2024-12-05 03:16:11.421665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:40.675 [2024-12-05 03:16:11.421701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.675 [2024-12-05 03:16:11.421746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.675 [2024-12-05 03:16:11.513037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.675 [2024-12-05 03:16:11.513266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:40.675 [2024-12-05 03:16:11.513334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.675 [2024-12-05 03:16:11.513359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.937 [2024-12-05 03:16:11.581156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.937 [2024-12-05 03:16:11.581344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:40.937 [2024-12-05 03:16:11.581407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.937 [2024-12-05 03:16:11.581435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.937 [2024-12-05 03:16:11.581578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.937 [2024-12-05 03:16:11.581609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:40.937 [2024-12-05 03:16:11.581633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.937 [2024-12-05 03:16:11.581653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.937 [2024-12-05 03:16:11.581722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.937 [2024-12-05 03:16:11.581851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:40.937 [2024-12-05 03:16:11.581874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.937 [2024-12-05 03:16:11.581894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.937 [2024-12-05 03:16:11.582027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.937 [2024-12-05 03:16:11.582055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:40.937 [2024-12-05 03:16:11.582105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.937 [2024-12-05 03:16:11.582540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.937 [2024-12-05 03:16:11.582839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.937 [2024-12-05 03:16:11.582864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:40.937 [2024-12-05 03:16:11.582879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.937 [2024-12-05 03:16:11.582888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.937 [2024-12-05 03:16:11.582939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.937 [2024-12-05 03:16:11.582951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:31:40.937 [2024-12-05 03:16:11.582962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.937 [2024-12-05 03:16:11.582970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.937 [2024-12-05 03:16:11.583027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.937 [2024-12-05 03:16:11.583039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:40.937 [2024-12-05 03:16:11.583049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.937 [2024-12-05 03:16:11.583057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.937 [2024-12-05 03:16:11.583338] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 396.321 ms, result 0 00:31:40.937 true 00:31:40.937 03:16:11 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 84127 00:31:40.937 03:16:11 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 84127 ']' 00:31:40.937 03:16:11 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 84127 00:31:40.937 03:16:11 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:31:40.937 03:16:11 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:40.937 03:16:11 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84127 00:31:40.937 killing process with pid 84127 00:31:40.937 03:16:11 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:40.937 03:16:11 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:40.937 03:16:11 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84127' 00:31:40.937 03:16:11 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 84127 00:31:40.937 03:16:11 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 84127 00:31:47.517 03:16:18 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:31:51.724 262144+0 records in 00:31:51.724 262144+0 records out 00:31:51.724 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.89158 s, 276 MB/s 00:31:51.724 03:16:22 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:53.637 03:16:24 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:53.637 [2024-12-05 03:16:24.222094] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:31:53.637 [2024-12-05 03:16:24.222189] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84375 ] 00:31:53.637 [2024-12-05 03:16:24.379242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:53.637 [2024-12-05 03:16:24.478171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.208 [2024-12-05 03:16:24.758462] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:54.208 [2024-12-05 03:16:24.758545] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:54.208 [2024-12-05 03:16:24.918196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.208 [2024-12-05 03:16:24.918244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:54.208 [2024-12-05 03:16:24.918257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:54.208 [2024-12-05 03:16:24.918266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.208 [2024-12-05 03:16:24.918315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.208 [2024-12-05 03:16:24.918327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:54.208 [2024-12-05 03:16:24.918336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:31:54.208 [2024-12-05 03:16:24.918344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.208 [2024-12-05 03:16:24.918361] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:54.208 [2024-12-05 03:16:24.919029] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:54.208 [2024-12-05 03:16:24.919045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.208 [2024-12-05 03:16:24.919052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:54.208 [2024-12-05 03:16:24.919061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms 00:31:54.208 [2024-12-05 03:16:24.919068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.208 [2024-12-05 03:16:24.920237] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:54.208 [2024-12-05 03:16:24.933319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.208 [2024-12-05 03:16:24.933356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:54.208 [2024-12-05 03:16:24.933369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.082 ms 00:31:54.209 [2024-12-05 03:16:24.933377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.209 [2024-12-05 03:16:24.933438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.209 [2024-12-05 03:16:24.933447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:54.209 [2024-12-05 03:16:24.933456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:31:54.209 [2024-12-05 03:16:24.933472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.209 [2024-12-05 03:16:24.939133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:54.209 [2024-12-05 03:16:24.939164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:54.209 [2024-12-05 03:16:24.939176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.595 ms 00:31:54.209 [2024-12-05 03:16:24.939188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.209 [2024-12-05 03:16:24.939257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.209 [2024-12-05 03:16:24.939267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:54.209 [2024-12-05 03:16:24.939275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:31:54.209 [2024-12-05 03:16:24.939283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.209 [2024-12-05 03:16:24.939319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.209 [2024-12-05 03:16:24.939328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:54.209 [2024-12-05 03:16:24.939336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:54.209 [2024-12-05 03:16:24.939344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.209 [2024-12-05 03:16:24.939368] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:54.209 [2024-12-05 03:16:24.942908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.209 [2024-12-05 03:16:24.942939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:54.209 [2024-12-05 03:16:24.942951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.546 ms 00:31:54.209 [2024-12-05 03:16:24.942959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.209 [2024-12-05 03:16:24.942990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.209 [2024-12-05 03:16:24.942998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:54.209 [2024-12-05 03:16:24.943007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:54.209 [2024-12-05 03:16:24.943014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.209 [2024-12-05 03:16:24.943033] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:54.209 [2024-12-05 03:16:24.943053] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:54.209 [2024-12-05 03:16:24.943098] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:54.209 [2024-12-05 03:16:24.943117] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:54.209 [2024-12-05 03:16:24.943221] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:54.209 [2024-12-05 03:16:24.943231] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:54.209 [2024-12-05 03:16:24.943241] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:54.209 [2024-12-05 03:16:24.943251] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:54.209 [2024-12-05 03:16:24.943260] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:54.209 [2024-12-05 03:16:24.943268] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:54.209 [2024-12-05 03:16:24.943276] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:54.209 [2024-12-05 03:16:24.943285] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:54.209 [2024-12-05 03:16:24.943292] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:54.209 [2024-12-05 03:16:24.943300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.209 [2024-12-05 03:16:24.943307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:54.209 [2024-12-05 03:16:24.943314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:31:54.209 [2024-12-05 03:16:24.943321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.209 [2024-12-05 03:16:24.943403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.209 [2024-12-05 03:16:24.943411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:54.209 [2024-12-05 03:16:24.943418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:31:54.209 [2024-12-05 03:16:24.943425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.209 [2024-12-05 03:16:24.943545] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:54.209 [2024-12-05 03:16:24.943556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:54.209 [2024-12-05 03:16:24.943565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:54.209 [2024-12-05 03:16:24.943573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:54.209 [2024-12-05 03:16:24.943580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:54.209 [2024-12-05 03:16:24.943587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:54.209 [2024-12-05 03:16:24.943593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:54.209 [2024-12-05 03:16:24.943601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:54.209 [2024-12-05 03:16:24.943608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:54.209 [2024-12-05 03:16:24.943615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:54.209 [2024-12-05 03:16:24.943622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:54.209 [2024-12-05 03:16:24.943628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:54.209 [2024-12-05 03:16:24.943635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:54.209 [2024-12-05 03:16:24.943649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:54.209 [2024-12-05 03:16:24.943656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:54.209 [2024-12-05 03:16:24.943662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:54.209 [2024-12-05 03:16:24.943669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:54.209 [2024-12-05 03:16:24.943677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:54.209 [2024-12-05 03:16:24.943683] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:54.209 [2024-12-05 03:16:24.943690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:54.209 [2024-12-05 03:16:24.943696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:54.209 [2024-12-05 03:16:24.943703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:54.209 [2024-12-05 03:16:24.943710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:54.209 [2024-12-05 03:16:24.943716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:54.209 [2024-12-05 03:16:24.943723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:54.209 [2024-12-05 03:16:24.943729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:54.209 [2024-12-05 03:16:24.943735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:54.209 [2024-12-05 03:16:24.943741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:54.209 [2024-12-05 03:16:24.943748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:54.209 [2024-12-05 03:16:24.943754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:54.209 [2024-12-05 03:16:24.943761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:54.209 [2024-12-05 03:16:24.943768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:54.209 [2024-12-05 03:16:24.943774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:54.209 [2024-12-05 03:16:24.943781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:54.209 [2024-12-05 03:16:24.943787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:54.209 [2024-12-05 03:16:24.943793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:54.209 [2024-12-05 03:16:24.943800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:54.209 [2024-12-05 03:16:24.943806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:54.209 [2024-12-05 03:16:24.943812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:54.209 [2024-12-05 03:16:24.943819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:54.209 [2024-12-05 03:16:24.943825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:54.209 [2024-12-05 03:16:24.943832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:54.209 [2024-12-05 03:16:24.943839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:54.209 [2024-12-05 03:16:24.943845] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:54.209 [2024-12-05 03:16:24.943853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:54.209 [2024-12-05 03:16:24.943862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:54.209 [2024-12-05 03:16:24.943869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:54.209 [2024-12-05 03:16:24.943877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:54.209 [2024-12-05 03:16:24.943884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:54.209 [2024-12-05 03:16:24.943891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:54.209 
[2024-12-05 03:16:24.943898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:54.209 [2024-12-05 03:16:24.943904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:54.209 [2024-12-05 03:16:24.943911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:54.210 [2024-12-05 03:16:24.943918] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:54.210 [2024-12-05 03:16:24.943927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:54.210 [2024-12-05 03:16:24.943938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:54.210 [2024-12-05 03:16:24.943945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:54.210 [2024-12-05 03:16:24.943952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:54.210 [2024-12-05 03:16:24.943959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:54.210 [2024-12-05 03:16:24.943965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:54.210 [2024-12-05 03:16:24.943972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:54.210 [2024-12-05 03:16:24.943979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:54.210 [2024-12-05 03:16:24.943986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:54.210 [2024-12-05 03:16:24.943993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:54.210 [2024-12-05 03:16:24.944000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:54.210 [2024-12-05 03:16:24.944006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:54.210 [2024-12-05 03:16:24.944013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:54.210 [2024-12-05 03:16:24.944020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:54.210 [2024-12-05 03:16:24.944027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:54.210 [2024-12-05 03:16:24.944034] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:54.210 [2024-12-05 03:16:24.944042] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:54.210 [2024-12-05 03:16:24.944050] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:54.210 [2024-12-05 03:16:24.944058] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:54.210 [2024-12-05 03:16:24.944064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:54.210 [2024-12-05 03:16:24.944091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:54.210 [2024-12-05 03:16:24.944099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.210 [2024-12-05 03:16:24.944107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:54.210 [2024-12-05 03:16:24.944116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.623 ms 00:31:54.210 [2024-12-05 03:16:24.944123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.210 [2024-12-05 03:16:24.971218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.210 [2024-12-05 03:16:24.971254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:54.210 [2024-12-05 03:16:24.971264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.039 ms 00:31:54.210 [2024-12-05 03:16:24.971275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.210 [2024-12-05 03:16:24.971356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.210 [2024-12-05 03:16:24.971365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:54.210 [2024-12-05 03:16:24.971373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:31:54.210 [2024-12-05 03:16:24.971380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.210 [2024-12-05 03:16:25.014035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.210 [2024-12-05 03:16:25.014205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:54.210 [2024-12-05 03:16:25.014224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.605 ms 00:31:54.210 [2024-12-05 03:16:25.014234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.210 [2024-12-05 03:16:25.014277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.210 [2024-12-05 03:16:25.014287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:54.210 [2024-12-05 03:16:25.014300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:54.210 [2024-12-05 03:16:25.014308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.210 [2024-12-05 03:16:25.014732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.210 [2024-12-05 03:16:25.014749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:54.210 [2024-12-05 03:16:25.014759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:31:54.210 [2024-12-05 03:16:25.014767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.210 [2024-12-05 03:16:25.014893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.210 [2024-12-05 03:16:25.014902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:54.210 [2024-12-05 03:16:25.014915] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:31:54.210 [2024-12-05 03:16:25.014923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.210 [2024-12-05 03:16:25.028090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.210 [2024-12-05 03:16:25.028121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:54.210 [2024-12-05 03:16:25.028131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.146 ms 00:31:54.210 [2024-12-05 03:16:25.028139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.210 [2024-12-05 03:16:25.040883] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:54.210 [2024-12-05 03:16:25.040918] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:54.210 [2024-12-05 03:16:25.040930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.210 [2024-12-05 03:16:25.040937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:54.210 [2024-12-05 03:16:25.040945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.703 ms 00:31:54.210 [2024-12-05 03:16:25.040952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.473 [2024-12-05 03:16:25.065281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.473 [2024-12-05 03:16:25.065323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:54.473 [2024-12-05 03:16:25.065333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.292 ms 00:31:54.473 [2024-12-05 03:16:25.065341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.473 [2024-12-05 03:16:25.077285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.473 [2024-12-05 03:16:25.077314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:54.473 [2024-12-05 03:16:25.077324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.906 ms 00:31:54.473 [2024-12-05 03:16:25.077331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.473 [2024-12-05 03:16:25.089231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.473 [2024-12-05 03:16:25.089260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:54.473 [2024-12-05 03:16:25.089270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.867 ms 00:31:54.473 [2024-12-05 03:16:25.089277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.473 [2024-12-05 03:16:25.089878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.473 [2024-12-05 03:16:25.089896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:54.473 [2024-12-05 03:16:25.089904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:31:54.473 [2024-12-05 03:16:25.089914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.473 [2024-12-05 03:16:25.146668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.473 [2024-12-05 03:16:25.146708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:54.473 [2024-12-05 03:16:25.146720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 56.739 ms 00:31:54.473 [2024-12-05 03:16:25.146732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.473 [2024-12-05 03:16:25.157002] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:54.473 [2024-12-05 03:16:25.159400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.473 [2024-12-05 03:16:25.159431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:54.473 [2024-12-05 03:16:25.159442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.631 ms 00:31:54.473 [2024-12-05 03:16:25.159451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.473 [2024-12-05 03:16:25.159527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.473 [2024-12-05 03:16:25.159538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:54.473 [2024-12-05 03:16:25.159548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:54.473 [2024-12-05 03:16:25.159555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.473 [2024-12-05 03:16:25.159619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.473 [2024-12-05 03:16:25.159629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:54.473 [2024-12-05 03:16:25.159637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:31:54.473 [2024-12-05 03:16:25.159644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.473 [2024-12-05 03:16:25.159662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.473 [2024-12-05 03:16:25.159670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:54.473 [2024-12-05 03:16:25.159678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:54.473 [2024-12-05 03:16:25.159685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.473 [2024-12-05 03:16:25.159714] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:54.473 [2024-12-05 03:16:25.159725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.473 [2024-12-05 03:16:25.159732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:54.473 [2024-12-05 03:16:25.159740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:54.473 [2024-12-05 03:16:25.159747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.473 [2024-12-05 03:16:25.183250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.473 [2024-12-05 03:16:25.183370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:54.473 [2024-12-05 03:16:25.183388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.487 ms 00:31:54.473 [2024-12-05 03:16:25.183405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.473 [2024-12-05 03:16:25.183466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.473 [2024-12-05 03:16:25.183475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:54.473 [2024-12-05 03:16:25.183483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:31:54.473 [2024-12-05 03:16:25.183490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:31:54.473 [2024-12-05 03:16:25.184373] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 265.760 ms, result 0 00:31:55.415  [2024-12-05T03:16:27.206Z] Copying: 17/1024 [MB] (17 MBps) [2024-12-05T03:16:28.595Z] Copying: 30/1024 [MB] (13 MBps) [2024-12-05T03:16:29.541Z] Copying: 42/1024 [MB] (11 MBps) [2024-12-05T03:16:30.485Z] Copying: 55/1024 [MB] (13 MBps) [2024-12-05T03:16:31.430Z] Copying: 68/1024 [MB] (13 MBps) [2024-12-05T03:16:32.377Z] Copying: 82/1024 [MB] (13 MBps) [2024-12-05T03:16:33.322Z] Copying: 92/1024 [MB] (10 MBps) [2024-12-05T03:16:34.270Z] Copying: 103/1024 [MB] (10 MBps) [2024-12-05T03:16:35.259Z] Copying: 119/1024 [MB] (16 MBps) [2024-12-05T03:16:36.249Z] Copying: 129/1024 [MB] (10 MBps) [2024-12-05T03:16:37.635Z] Copying: 153/1024 [MB] (24 MBps) [2024-12-05T03:16:38.207Z] Copying: 186/1024 [MB] (32 MBps) [2024-12-05T03:16:39.594Z] Copying: 219/1024 [MB] (32 MBps) [2024-12-05T03:16:40.539Z] Copying: 243/1024 [MB] (24 MBps) [2024-12-05T03:16:41.483Z] Copying: 258/1024 [MB] (15 MBps) [2024-12-05T03:16:42.427Z] Copying: 272/1024 [MB] (13 MBps) [2024-12-05T03:16:43.373Z] Copying: 285/1024 [MB] (12 MBps) [2024-12-05T03:16:44.317Z] Copying: 298/1024 [MB] (13 MBps) [2024-12-05T03:16:45.264Z] Copying: 330/1024 [MB] (31 MBps) [2024-12-05T03:16:46.204Z] Copying: 360/1024 [MB] (30 MBps) [2024-12-05T03:16:47.591Z] Copying: 373/1024 [MB] (13 MBps) [2024-12-05T03:16:48.533Z] Copying: 384/1024 [MB] (10 MBps) [2024-12-05T03:16:49.475Z] Copying: 401/1024 [MB] (17 MBps) [2024-12-05T03:16:50.425Z] Copying: 415/1024 [MB] (13 MBps) [2024-12-05T03:16:51.366Z] Copying: 428/1024 [MB] (12 MBps) [2024-12-05T03:16:52.309Z] Copying: 458/1024 [MB] (30 MBps) [2024-12-05T03:16:53.253Z] Copying: 474/1024 [MB] (16 MBps) [2024-12-05T03:16:54.198Z] Copying: 489/1024 [MB] (15 MBps) [2024-12-05T03:16:55.585Z] Copying: 512/1024 [MB] (23 MBps) [2024-12-05T03:16:56.531Z] Copying: 541/1024 [MB] (28 MBps) [2024-12-05T03:16:57.510Z] Copying: 555/1024 [MB] (13 MBps) [2024-12-05T03:16:58.455Z] Copying: 577/1024 [MB] (21 MBps) [2024-12-05T03:16:59.402Z] Copying: 591/1024 [MB] (14 MBps) [2024-12-05T03:17:00.346Z] Copying: 609/1024 [MB] (18 MBps) [2024-12-05T03:17:01.291Z] Copying: 641/1024 [MB] (32 MBps) [2024-12-05T03:17:02.237Z] Copying: 661/1024 [MB] (19 MBps) [2024-12-05T03:17:03.624Z] Copying: 677/1024 [MB] (15 MBps) [2024-12-05T03:17:04.566Z] Copying: 696/1024 [MB] (19 MBps) [2024-12-05T03:17:05.510Z] Copying: 716/1024 [MB] (19 MBps) [2024-12-05T03:17:06.451Z] Copying: 733/1024 [MB] (17 MBps) [2024-12-05T03:17:07.425Z] Copying: 749/1024 [MB] (15 MBps) [2024-12-05T03:17:08.393Z] Copying: 759/1024 [MB] (10 MBps) [2024-12-05T03:17:09.337Z] Copying: 770/1024 [MB] (11 MBps) [2024-12-05T03:17:10.283Z] Copying: 785/1024 [MB] (15 MBps) [2024-12-05T03:17:11.229Z] Copying: 808/1024 [MB] (22 MBps) [2024-12-05T03:17:12.616Z] Copying: 825/1024 [MB] (17 MBps) [2024-12-05T03:17:13.561Z] Copying: 838/1024 [MB] (12 MBps) [2024-12-05T03:17:14.502Z] Copying: 850/1024 [MB] (11 MBps) [2024-12-05T03:17:15.446Z] Copying: 872/1024 [MB] (22 MBps) [2024-12-05T03:17:16.387Z] Copying: 885/1024 [MB] (13 MBps) [2024-12-05T03:17:17.331Z] Copying: 901/1024 [MB] (15 MBps) [2024-12-05T03:17:18.275Z] Copying: 925/1024 [MB] (23 MBps) [2024-12-05T03:17:19.218Z] Copying: 957/1024 [MB] (32 MBps) [2024-12-05T03:17:20.605Z] Copying: 981/1024 [MB] (23 MBps) [2024-12-05T03:17:21.554Z] Copying: 997/1024 [MB] (15 MBps) [2024-12-05T03:17:21.554Z] Copying: 1023/1024 [MB] (26 MBps) 
[2024-12-05T03:17:21.554Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-12-05 03:17:21.208186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.710 [2024-12-05 03:17:21.208289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:50.710 [2024-12-05 03:17:21.208328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:50.710 [2024-12-05 03:17:21.208353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.710 [2024-12-05 03:17:21.208415] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:50.710 [2024-12-05 03:17:21.216325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.710 [2024-12-05 03:17:21.216401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:50.710 [2024-12-05 03:17:21.216444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.870 ms 00:32:50.710 [2024-12-05 03:17:21.216466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.710 [2024-12-05 03:17:21.220833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.710 [2024-12-05 03:17:21.220868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:50.710 [2024-12-05 03:17:21.220879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.313 ms 00:32:50.710 [2024-12-05 03:17:21.220888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.710 [2024-12-05 03:17:21.220915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.710 [2024-12-05 03:17:21.220925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:50.710 [2024-12-05 03:17:21.220934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:50.710 [2024-12-05 03:17:21.220942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.710 [2024-12-05 03:17:21.220997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.710 [2024-12-05 03:17:21.221007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:50.710 [2024-12-05 03:17:21.221015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:32:50.710 [2024-12-05 03:17:21.221024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.710 [2024-12-05 03:17:21.221038] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:50.710 [2024-12-05 03:17:21.221051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 
261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:50.710 [2024-12-05 03:17:21.221402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221507] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 
03:17:21.221717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:50.711 [2024-12-05 03:17:21.221872] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:50.711 [2024-12-05 03:17:21.221942] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 71dee4a2-a84c-41e2-9929-a20823ca6df5 00:32:50.711 [2024-12-05 03:17:21.221953] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:50.711 [2024-12-05 03:17:21.221960] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:32:50.711 [2024-12-05 03:17:21.221967] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:50.711 [2024-12-05 03:17:21.221979] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:50.711 [2024-12-05 03:17:21.221986] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:50.711 [2024-12-05 
03:17:21.221994] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:50.711 [2024-12-05 03:17:21.222001] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:50.711 [2024-12-05 03:17:21.222007] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:50.711 [2024-12-05 03:17:21.222014] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:50.711 [2024-12-05 03:17:21.222022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.711 [2024-12-05 03:17:21.222029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:50.711 [2024-12-05 03:17:21.222038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.984 ms 00:32:50.711 [2024-12-05 03:17:21.222045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.711 [2024-12-05 03:17:21.235504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.711 [2024-12-05 03:17:21.235619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:50.711 [2024-12-05 03:17:21.235671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.442 ms 00:32:50.711 [2024-12-05 03:17:21.235694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.711 [2024-12-05 03:17:21.236064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.711 [2024-12-05 03:17:21.236173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:50.711 [2024-12-05 03:17:21.236338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:32:50.711 [2024-12-05 03:17:21.236372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.711 [2024-12-05 03:17:21.272007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:50.711 [2024-12-05 03:17:21.272140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:50.711 [2024-12-05 03:17:21.272203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:50.711 [2024-12-05 03:17:21.272227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.711 [2024-12-05 03:17:21.272323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:50.711 [2024-12-05 03:17:21.272348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:50.711 [2024-12-05 03:17:21.272396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:50.711 [2024-12-05 03:17:21.272419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.711 [2024-12-05 03:17:21.272502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:50.711 [2024-12-05 03:17:21.272563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:50.711 [2024-12-05 03:17:21.272587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:50.711 [2024-12-05 03:17:21.272636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.711 [2024-12-05 03:17:21.272667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:50.711 [2024-12-05 03:17:21.272688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:50.711 [2024-12-05 03:17:21.272734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:50.711 [2024-12-05 03:17:21.272755] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:32:50.711 [2024-12-05 03:17:21.355044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:50.711 [2024-12-05 03:17:21.355259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:50.712 [2024-12-05 03:17:21.355318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:50.712 [2024-12-05 03:17:21.355342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.712 [2024-12-05 03:17:21.427797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:50.712 [2024-12-05 03:17:21.427933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:50.712 [2024-12-05 03:17:21.428001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:50.712 [2024-12-05 03:17:21.428024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.712 [2024-12-05 03:17:21.428109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:50.712 [2024-12-05 03:17:21.428134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:50.712 [2024-12-05 03:17:21.428255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:50.712 [2024-12-05 03:17:21.428274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.712 [2024-12-05 03:17:21.428376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:50.712 [2024-12-05 03:17:21.428402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:50.712 [2024-12-05 03:17:21.428422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:50.712 [2024-12-05 03:17:21.428492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.712 [2024-12-05 03:17:21.428581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:50.712 [2024-12-05 03:17:21.428639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:50.712 [2024-12-05 03:17:21.428702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:50.712 [2024-12-05 03:17:21.428727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.712 [2024-12-05 03:17:21.428795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:50.712 [2024-12-05 03:17:21.428822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:50.712 [2024-12-05 03:17:21.428867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:50.712 [2024-12-05 03:17:21.428889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.712 [2024-12-05 03:17:21.428938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:50.712 [2024-12-05 03:17:21.428982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:50.712 [2024-12-05 03:17:21.429004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:50.712 [2024-12-05 03:17:21.429057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.712 [2024-12-05 03:17:21.429137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:50.712 [2024-12-05 03:17:21.429163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:50.712 [2024-12-05 03:17:21.429206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:50.712 
[2024-12-05 03:17:21.429253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.712 [2024-12-05 03:17:21.429398] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 221.211 ms, result 0 00:32:51.655 00:32:51.655 00:32:51.655 03:17:22 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:32:51.655 [2024-12-05 03:17:22.299114] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:32:51.655 [2024-12-05 03:17:22.299464] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84961 ] 00:32:51.655 [2024-12-05 03:17:22.456930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:51.917 [2024-12-05 03:17:22.584302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:52.178 [2024-12-05 03:17:22.883670] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:52.178 [2024-12-05 03:17:22.883963] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:52.442 [2024-12-05 03:17:23.044316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.442 [2024-12-05 03:17:23.044380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:52.442 [2024-12-05 03:17:23.044397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:52.442 [2024-12-05 03:17:23.044408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.442 [2024-12-05 03:17:23.044466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.442 [2024-12-05 03:17:23.044479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:52.442 [2024-12-05 03:17:23.044489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:32:52.442 [2024-12-05 03:17:23.044497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.442 [2024-12-05 03:17:23.044519] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:52.442 [2024-12-05 03:17:23.045337] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:52.442 [2024-12-05 03:17:23.045358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.442 [2024-12-05 03:17:23.045367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:52.442 [2024-12-05 03:17:23.045377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.845 ms 00:32:52.442 [2024-12-05 03:17:23.045386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.442 [2024-12-05 03:17:23.045904] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:52.442 [2024-12-05 03:17:23.045958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.442 [2024-12-05 03:17:23.045971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:52.442 [2024-12-05 03:17:23.045983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.060 ms 00:32:52.442 [2024-12-05 03:17:23.045991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.442 [2024-12-05 03:17:23.046117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.442 [2024-12-05 03:17:23.046129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:52.442 [2024-12-05 03:17:23.046137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:32:52.442 [2024-12-05 03:17:23.046145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.442 [2024-12-05 03:17:23.046438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.442 [2024-12-05 03:17:23.046450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:52.442 [2024-12-05 03:17:23.046459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:32:52.442 [2024-12-05 03:17:23.046468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.442 [2024-12-05 03:17:23.046538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.442 [2024-12-05 03:17:23.046549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:52.442 [2024-12-05 03:17:23.046557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:32:52.442 [2024-12-05 03:17:23.046565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.442 [2024-12-05 03:17:23.046589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.442 [2024-12-05 03:17:23.046598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:52.442 [2024-12-05 03:17:23.046609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:52.442 [2024-12-05 03:17:23.046617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.442 [2024-12-05 03:17:23.046636] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:52.442 [2024-12-05 03:17:23.051148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.442 [2024-12-05 03:17:23.051185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:52.442 [2024-12-05 03:17:23.051197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.517 ms 00:32:52.442 [2024-12-05 03:17:23.051204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.442 [2024-12-05 03:17:23.051252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.442 [2024-12-05 03:17:23.051261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:52.442 [2024-12-05 03:17:23.051270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:32:52.442 [2024-12-05 03:17:23.051278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.442 [2024-12-05 03:17:23.051336] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:52.443 [2024-12-05 03:17:23.051361] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:52.443 [2024-12-05 03:17:23.051400] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:52.443 [2024-12-05 03:17:23.051416] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 
00:32:52.443 [2024-12-05 03:17:23.051522] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:52.443 [2024-12-05 03:17:23.051533] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:52.443 [2024-12-05 03:17:23.051544] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:52.443 [2024-12-05 03:17:23.051555] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:52.443 [2024-12-05 03:17:23.051565] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:52.443 [2024-12-05 03:17:23.051578] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:52.443 [2024-12-05 03:17:23.051586] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:52.443 [2024-12-05 03:17:23.051595] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:52.443 [2024-12-05 03:17:23.051602] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:52.443 [2024-12-05 03:17:23.051610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.443 [2024-12-05 03:17:23.051618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:52.443 [2024-12-05 03:17:23.051627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:32:52.443 [2024-12-05 03:17:23.051635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.443 [2024-12-05 03:17:23.051717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.443 [2024-12-05 03:17:23.051725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:52.443 [2024-12-05 03:17:23.051734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:52.443 [2024-12-05 03:17:23.051745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.443 [2024-12-05 03:17:23.051849] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:52.443 [2024-12-05 03:17:23.051860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:52.443 [2024-12-05 03:17:23.051869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:52.443 [2024-12-05 03:17:23.051877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:52.443 [2024-12-05 03:17:23.051886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:52.443 [2024-12-05 03:17:23.051894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:52.443 [2024-12-05 03:17:23.051901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:52.443 [2024-12-05 03:17:23.051909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:52.443 [2024-12-05 03:17:23.051917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:52.443 [2024-12-05 03:17:23.051924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:52.443 [2024-12-05 03:17:23.051931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:52.443 [2024-12-05 03:17:23.051938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:52.443 [2024-12-05 03:17:23.051944] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.50 MiB 00:32:52.443 [2024-12-05 03:17:23.051953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:52.443 [2024-12-05 03:17:23.051962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:52.443 [2024-12-05 03:17:23.051975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:52.443 [2024-12-05 03:17:23.051982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:52.443 [2024-12-05 03:17:23.051988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:52.443 [2024-12-05 03:17:23.051995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:52.443 [2024-12-05 03:17:23.052002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:52.443 [2024-12-05 03:17:23.052009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:52.443 [2024-12-05 03:17:23.052015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:52.443 [2024-12-05 03:17:23.052022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:52.443 [2024-12-05 03:17:23.052028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:52.443 [2024-12-05 03:17:23.052035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:52.443 [2024-12-05 03:17:23.052041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:52.443 [2024-12-05 03:17:23.052048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:52.443 [2024-12-05 03:17:23.052055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:52.443 [2024-12-05 03:17:23.052061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:52.443 [2024-12-05 03:17:23.052068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:52.443 [2024-12-05 03:17:23.052109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:52.443 [2024-12-05 03:17:23.052116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:52.443 [2024-12-05 03:17:23.052123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:52.443 [2024-12-05 03:17:23.052130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:52.443 [2024-12-05 03:17:23.052137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:52.443 [2024-12-05 03:17:23.052144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:52.443 [2024-12-05 03:17:23.052151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:52.443 [2024-12-05 03:17:23.052157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:52.443 [2024-12-05 03:17:23.052164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:52.443 [2024-12-05 03:17:23.052171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:52.443 [2024-12-05 03:17:23.052179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:52.443 [2024-12-05 03:17:23.052186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:52.443 [2024-12-05 03:17:23.052193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:52.443 [2024-12-05 03:17:23.052200] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:52.443 [2024-12-05 
03:17:23.052208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:52.443 [2024-12-05 03:17:23.052218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:52.443 [2024-12-05 03:17:23.052226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:52.443 [2024-12-05 03:17:23.052237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:52.443 [2024-12-05 03:17:23.052245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:52.443 [2024-12-05 03:17:23.052252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:52.443 [2024-12-05 03:17:23.052259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:52.443 [2024-12-05 03:17:23.052266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:52.443 [2024-12-05 03:17:23.052273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:52.443 [2024-12-05 03:17:23.052281] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:52.443 [2024-12-05 03:17:23.052290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:52.443 [2024-12-05 03:17:23.052300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:52.443 [2024-12-05 03:17:23.052307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:52.443 [2024-12-05 03:17:23.052315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:52.443 [2024-12-05 03:17:23.052321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:52.443 [2024-12-05 03:17:23.052329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:52.443 [2024-12-05 03:17:23.052336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:52.443 [2024-12-05 03:17:23.052343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:52.443 [2024-12-05 03:17:23.052350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:52.443 [2024-12-05 03:17:23.052357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:52.443 [2024-12-05 03:17:23.052365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:52.443 [2024-12-05 03:17:23.052372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:52.443 [2024-12-05 03:17:23.052380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:52.443 [2024-12-05 03:17:23.052387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 
blk_sz:0x20 00:32:52.443 [2024-12-05 03:17:23.052395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:52.443 [2024-12-05 03:17:23.052401] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:52.443 [2024-12-05 03:17:23.052410] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:52.443 [2024-12-05 03:17:23.052419] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:52.443 [2024-12-05 03:17:23.052427] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:52.443 [2024-12-05 03:17:23.052435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:52.443 [2024-12-05 03:17:23.052443] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:52.443 [2024-12-05 03:17:23.052451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.443 [2024-12-05 03:17:23.052458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:52.444 [2024-12-05 03:17:23.052469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.671 ms 00:32:52.444 [2024-12-05 03:17:23.052476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.444 [2024-12-05 03:17:23.082454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.444 [2024-12-05 03:17:23.082632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:52.444 [2024-12-05 03:17:23.082699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.936 ms 00:32:52.444 [2024-12-05 03:17:23.082723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.444 [2024-12-05 03:17:23.082829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.444 [2024-12-05 03:17:23.082850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:52.444 [2024-12-05 03:17:23.082879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:32:52.444 [2024-12-05 03:17:23.082898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.444 [2024-12-05 03:17:23.133608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.444 [2024-12-05 03:17:23.133806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:52.444 [2024-12-05 03:17:23.133878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.637 ms 00:32:52.444 [2024-12-05 03:17:23.133903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.444 [2024-12-05 03:17:23.133973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.444 [2024-12-05 03:17:23.133998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:52.444 [2024-12-05 03:17:23.134018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:52.444 [2024-12-05 03:17:23.134038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.444 [2024-12-05 03:17:23.134194] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.444 [2024-12-05 03:17:23.134358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:52.444 [2024-12-05 03:17:23.134385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:32:52.444 [2024-12-05 03:17:23.134404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.444 [2024-12-05 03:17:23.134566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.444 [2024-12-05 03:17:23.134640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:52.444 [2024-12-05 03:17:23.134661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:32:52.444 [2024-12-05 03:17:23.134680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.444 [2024-12-05 03:17:23.150782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.444 [2024-12-05 03:17:23.150948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:52.444 [2024-12-05 03:17:23.151008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.905 ms 00:32:52.444 [2024-12-05 03:17:23.151031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.444 [2024-12-05 03:17:23.151233] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:52.444 [2024-12-05 03:17:23.151376] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:52.444 [2024-12-05 03:17:23.151589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.444 [2024-12-05 03:17:23.151618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:52.444 [2024-12-05 03:17:23.151676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.380 ms 00:32:52.444 [2024-12-05 03:17:23.151701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.444 [2024-12-05 03:17:23.164009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.444 [2024-12-05 03:17:23.164192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:52.444 [2024-12-05 03:17:23.164257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.267 ms 00:32:52.444 [2024-12-05 03:17:23.164280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.444 [2024-12-05 03:17:23.164425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.444 [2024-12-05 03:17:23.164447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:52.444 [2024-12-05 03:17:23.164466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:32:52.444 [2024-12-05 03:17:23.164482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.444 [2024-12-05 03:17:23.164539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.444 [2024-12-05 03:17:23.164549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:52.444 [2024-12-05 03:17:23.164566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:52.444 [2024-12-05 03:17:23.164574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.444 [2024-12-05 03:17:23.165177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.444 [2024-12-05 
03:17:23.165195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:52.444 [2024-12-05 03:17:23.165204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:32:52.444 [2024-12-05 03:17:23.165212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.444 [2024-12-05 03:17:23.165235] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:52.444 [2024-12-05 03:17:23.165246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.444 [2024-12-05 03:17:23.165254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:52.444 [2024-12-05 03:17:23.165262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:52.444 [2024-12-05 03:17:23.165270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.444 [2024-12-05 03:17:23.178117] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:52.444 [2024-12-05 03:17:23.178399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.444 [2024-12-05 03:17:23.178436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:52.444 [2024-12-05 03:17:23.178519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.108 ms 00:32:52.444 [2024-12-05 03:17:23.178542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.444 [2024-12-05 03:17:23.180808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.444 [2024-12-05 03:17:23.180938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:52.444 [2024-12-05 03:17:23.180997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.229 ms 00:32:52.444 [2024-12-05 03:17:23.181018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.444 [2024-12-05 03:17:23.181147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.444 [2024-12-05 03:17:23.181229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:52.444 [2024-12-05 03:17:23.181255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:32:52.444 [2024-12-05 03:17:23.181273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.444 [2024-12-05 03:17:23.181343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.444 [2024-12-05 03:17:23.181374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:52.444 [2024-12-05 03:17:23.181395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:52.444 [2024-12-05 03:17:23.181413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.444 [2024-12-05 03:17:23.181508] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:52.444 [2024-12-05 03:17:23.181549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.444 [2024-12-05 03:17:23.181568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:52.444 [2024-12-05 03:17:23.181588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:32:52.444 [2024-12-05 03:17:23.181640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.444 [2024-12-05 03:17:23.208689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:52.444 [2024-12-05 03:17:23.208866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:52.444 [2024-12-05 03:17:23.208930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.008 ms 00:32:52.444 [2024-12-05 03:17:23.208954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.444 [2024-12-05 03:17:23.209474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.444 [2024-12-05 03:17:23.209736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:52.444 [2024-12-05 03:17:23.209971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:32:52.444 [2024-12-05 03:17:23.210048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.444 [2024-12-05 03:17:23.212842] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 167.300 ms, result 0 00:32:53.831  [2024-12-05T03:17:25.614Z] Copying: 18/1024 [MB] (18 MBps) [2024-12-05T03:17:26.554Z] Copying: 38/1024 [MB] (19 MBps) [2024-12-05T03:17:27.496Z] Copying: 63/1024 [MB] (25 MBps) [2024-12-05T03:17:28.441Z] Copying: 84/1024 [MB] (21 MBps) [2024-12-05T03:17:29.829Z] Copying: 107/1024 [MB] (22 MBps) [2024-12-05T03:17:30.403Z] Copying: 121/1024 [MB] (14 MBps) [2024-12-05T03:17:31.793Z] Copying: 132/1024 [MB] (11 MBps) [2024-12-05T03:17:32.736Z] Copying: 147/1024 [MB] (15 MBps) [2024-12-05T03:17:33.680Z] Copying: 159/1024 [MB] (11 MBps) [2024-12-05T03:17:34.625Z] Copying: 175/1024 [MB] (15 MBps) [2024-12-05T03:17:35.570Z] Copying: 189/1024 [MB] (13 MBps) [2024-12-05T03:17:36.512Z] Copying: 203/1024 [MB] (14 MBps) [2024-12-05T03:17:37.458Z] Copying: 220/1024 [MB] (17 MBps) [2024-12-05T03:17:38.433Z] Copying: 235/1024 [MB] (15 MBps) [2024-12-05T03:17:39.409Z] Copying: 261/1024 [MB] (26 MBps) [2024-12-05T03:17:40.807Z] Copying: 275/1024 [MB] (13 MBps) [2024-12-05T03:17:41.753Z] Copying: 298/1024 [MB] (22 MBps) [2024-12-05T03:17:42.696Z] Copying: 312/1024 [MB] (14 MBps) [2024-12-05T03:17:43.638Z] Copying: 331/1024 [MB] (18 MBps) [2024-12-05T03:17:44.583Z] Copying: 350/1024 [MB] (19 MBps) [2024-12-05T03:17:45.526Z] Copying: 371/1024 [MB] (20 MBps) [2024-12-05T03:17:46.468Z] Copying: 385/1024 [MB] (14 MBps) [2024-12-05T03:17:47.411Z] Copying: 399/1024 [MB] (14 MBps) [2024-12-05T03:17:48.801Z] Copying: 418/1024 [MB] (19 MBps) [2024-12-05T03:17:49.746Z] Copying: 437/1024 [MB] (18 MBps) [2024-12-05T03:17:50.692Z] Copying: 460/1024 [MB] (23 MBps) [2024-12-05T03:17:51.635Z] Copying: 477/1024 [MB] (16 MBps) [2024-12-05T03:17:52.581Z] Copying: 498/1024 [MB] (21 MBps) [2024-12-05T03:17:53.526Z] Copying: 520/1024 [MB] (22 MBps) [2024-12-05T03:17:54.472Z] Copying: 540/1024 [MB] (19 MBps) [2024-12-05T03:17:55.416Z] Copying: 560/1024 [MB] (20 MBps) [2024-12-05T03:17:56.801Z] Copying: 577/1024 [MB] (16 MBps) [2024-12-05T03:17:57.744Z] Copying: 594/1024 [MB] (16 MBps) [2024-12-05T03:17:58.690Z] Copying: 611/1024 [MB] (17 MBps) [2024-12-05T03:17:59.633Z] Copying: 636/1024 [MB] (24 MBps) [2024-12-05T03:18:00.576Z] Copying: 646/1024 [MB] (10 MBps) [2024-12-05T03:18:01.520Z] Copying: 657/1024 [MB] (10 MBps) [2024-12-05T03:18:02.462Z] Copying: 667/1024 [MB] (10 MBps) [2024-12-05T03:18:03.406Z] Copying: 678/1024 [MB] (10 MBps) [2024-12-05T03:18:04.796Z] Copying: 690/1024 [MB] (11 MBps) [2024-12-05T03:18:05.745Z] Copying: 700/1024 [MB] (10 MBps) [2024-12-05T03:18:06.690Z] Copying: 711/1024 [MB] (10 MBps) [2024-12-05T03:18:07.637Z] Copying: 722/1024 
[MB] (11 MBps) [2024-12-05T03:18:08.583Z] Copying: 733/1024 [MB] (10 MBps) [2024-12-05T03:18:09.526Z] Copying: 743/1024 [MB] (10 MBps) [2024-12-05T03:18:10.525Z] Copying: 754/1024 [MB] (10 MBps) [2024-12-05T03:18:11.471Z] Copying: 764/1024 [MB] (10 MBps) [2024-12-05T03:18:12.413Z] Copying: 775/1024 [MB] (10 MBps) [2024-12-05T03:18:13.801Z] Copying: 786/1024 [MB] (10 MBps) [2024-12-05T03:18:14.748Z] Copying: 806/1024 [MB] (20 MBps) [2024-12-05T03:18:15.692Z] Copying: 819/1024 [MB] (12 MBps) [2024-12-05T03:18:16.636Z] Copying: 834/1024 [MB] (15 MBps) [2024-12-05T03:18:17.578Z] Copying: 856/1024 [MB] (22 MBps) [2024-12-05T03:18:18.522Z] Copying: 867/1024 [MB] (10 MBps) [2024-12-05T03:18:19.467Z] Copying: 882/1024 [MB] (15 MBps) [2024-12-05T03:18:20.411Z] Copying: 901/1024 [MB] (18 MBps) [2024-12-05T03:18:21.796Z] Copying: 912/1024 [MB] (10 MBps) [2024-12-05T03:18:22.737Z] Copying: 925/1024 [MB] (13 MBps) [2024-12-05T03:18:23.677Z] Copying: 939/1024 [MB] (13 MBps) [2024-12-05T03:18:24.617Z] Copying: 953/1024 [MB] (14 MBps) [2024-12-05T03:18:25.557Z] Copying: 969/1024 [MB] (16 MBps) [2024-12-05T03:18:26.493Z] Copying: 984/1024 [MB] (14 MBps) [2024-12-05T03:18:27.430Z] Copying: 998/1024 [MB] (13 MBps) [2024-12-05T03:18:28.810Z] Copying: 1010/1024 [MB] (12 MBps) [2024-12-05T03:18:28.810Z] Copying: 1021/1024 [MB] (10 MBps) [2024-12-05T03:18:29.073Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-05 03:18:28.978007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.229 [2024-12-05 03:18:28.978106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:58.229 [2024-12-05 03:18:28.978124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:58.229 [2024-12-05 03:18:28.978134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.229 [2024-12-05 03:18:28.978165] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:58.229 [2024-12-05 03:18:28.981213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.229 [2024-12-05 03:18:28.981246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:58.229 [2024-12-05 03:18:28.981258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.030 ms 00:33:58.229 [2024-12-05 03:18:28.981267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.229 [2024-12-05 03:18:28.981513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.229 [2024-12-05 03:18:28.981524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:58.229 [2024-12-05 03:18:28.981534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:33:58.229 [2024-12-05 03:18:28.981542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.229 [2024-12-05 03:18:28.981577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.229 [2024-12-05 03:18:28.981603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:58.229 [2024-12-05 03:18:28.981613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:58.229 [2024-12-05 03:18:28.981622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.229 [2024-12-05 03:18:28.982009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.229 [2024-12-05 03:18:28.982020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Set FTL SHM clean state 00:33:58.229 [2024-12-05 03:18:28.982029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:33:58.229 [2024-12-05 03:18:28.982037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.229 [2024-12-05 03:18:28.982051] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:58.229 [2024-12-05 03:18:28.982064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982262] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 
[2024-12-05 03:18:28.982461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:58.229 [2024-12-05 03:18:28.982573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 
state: free 00:33:58.230 [2024-12-05 03:18:28.982650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 
0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:58.230 [2024-12-05 03:18:28.982878] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:58.230 [2024-12-05 03:18:28.982887] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 71dee4a2-a84c-41e2-9929-a20823ca6df5 00:33:58.230 [2024-12-05 03:18:28.982895] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:58.230 [2024-12-05 03:18:28.982902] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:33:58.230 [2024-12-05 03:18:28.982911] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:58.230 [2024-12-05 03:18:28.982920] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:58.230 [2024-12-05 03:18:28.982927] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:58.230 [2024-12-05 03:18:28.982936] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:58.230 [2024-12-05 03:18:28.982945] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:58.230 [2024-12-05 03:18:28.982952] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:58.230 [2024-12-05 03:18:28.982959] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:58.230 [2024-12-05 03:18:28.982966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.230 [2024-12-05 03:18:28.982974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:58.230 [2024-12-05 03:18:28.982983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.916 ms 00:33:58.230 [2024-12-05 03:18:28.982993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.230 [2024-12-05 03:18:28.997710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.230 [2024-12-05 03:18:28.997746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:58.230 [2024-12-05 03:18:28.997758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.700 ms 00:33:58.230 [2024-12-05 03:18:28.997766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.230 [2024-12-05 03:18:28.998187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.230 [2024-12-05 03:18:28.998198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:58.230 [2024-12-05 03:18:28.998214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:33:58.230 [2024-12-05 03:18:28.998222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.230 [2024-12-05 03:18:29.037490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:58.230 [2024-12-05 03:18:29.037725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:58.230 [2024-12-05 03:18:29.037748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:58.230 [2024-12-05 03:18:29.037758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.230 [2024-12-05 
03:18:29.037835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:58.230 [2024-12-05 03:18:29.037845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:58.230 [2024-12-05 03:18:29.037861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:58.230 [2024-12-05 03:18:29.037870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.230 [2024-12-05 03:18:29.037929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:58.230 [2024-12-05 03:18:29.037939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:58.230 [2024-12-05 03:18:29.037948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:58.230 [2024-12-05 03:18:29.037956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.230 [2024-12-05 03:18:29.037974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:58.230 [2024-12-05 03:18:29.037982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:58.230 [2024-12-05 03:18:29.037991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:58.230 [2024-12-05 03:18:29.038002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.492 [2024-12-05 03:18:29.121166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:58.492 [2024-12-05 03:18:29.121213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:58.492 [2024-12-05 03:18:29.121227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:58.492 [2024-12-05 03:18:29.121235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.492 [2024-12-05 03:18:29.189019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:58.492 [2024-12-05 03:18:29.189253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:58.492 [2024-12-05 03:18:29.189276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:58.492 [2024-12-05 03:18:29.189293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.492 [2024-12-05 03:18:29.189380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:58.492 [2024-12-05 03:18:29.189389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:58.492 [2024-12-05 03:18:29.189399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:58.492 [2024-12-05 03:18:29.189407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.492 [2024-12-05 03:18:29.189449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:58.492 [2024-12-05 03:18:29.189458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:58.492 [2024-12-05 03:18:29.189467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:58.492 [2024-12-05 03:18:29.189476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.492 [2024-12-05 03:18:29.189562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:58.492 [2024-12-05 03:18:29.189572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:58.492 [2024-12-05 03:18:29.189580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:58.492 [2024-12-05 03:18:29.189613] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.492 [2024-12-05 03:18:29.189641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:58.492 [2024-12-05 03:18:29.189651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:58.492 [2024-12-05 03:18:29.189660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:58.492 [2024-12-05 03:18:29.189683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.492 [2024-12-05 03:18:29.189729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:58.492 [2024-12-05 03:18:29.189739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:58.492 [2024-12-05 03:18:29.189747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:58.492 [2024-12-05 03:18:29.189756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.492 [2024-12-05 03:18:29.189799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:58.492 [2024-12-05 03:18:29.189809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:58.492 [2024-12-05 03:18:29.189818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:58.492 [2024-12-05 03:18:29.189826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.492 [2024-12-05 03:18:29.189958] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 211.919 ms, result 0 00:33:59.436 00:33:59.436 00:33:59.436 03:18:29 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:01.349 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:34:01.349 03:18:32 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:34:01.349 [2024-12-05 03:18:32.149088] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:34:01.349 [2024-12-05 03:18:32.149211] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85654 ] 00:34:01.610 [2024-12-05 03:18:32.308800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:01.610 [2024-12-05 03:18:32.409307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:01.869 [2024-12-05 03:18:32.705997] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:01.869 [2024-12-05 03:18:32.706108] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:02.130 [2024-12-05 03:18:32.867288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.130 [2024-12-05 03:18:32.867527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:02.130 [2024-12-05 03:18:32.867553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:02.130 [2024-12-05 03:18:32.867562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.130 [2024-12-05 03:18:32.867636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.130 [2024-12-05 03:18:32.867651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:02.130 [2024-12-05 03:18:32.867660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:34:02.130 [2024-12-05 03:18:32.867669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.130 [2024-12-05 03:18:32.867692] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:02.130 [2024-12-05 03:18:32.868478] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:02.130 [2024-12-05 03:18:32.868499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.130 [2024-12-05 03:18:32.868508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:02.130 [2024-12-05 03:18:32.868518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.814 ms 00:34:02.130 [2024-12-05 03:18:32.868527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.130 [2024-12-05 03:18:32.868764] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:34:02.130 [2024-12-05 03:18:32.868795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.130 [2024-12-05 03:18:32.868806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:02.130 [2024-12-05 03:18:32.868816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:34:02.130 [2024-12-05 03:18:32.868824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.130 [2024-12-05 03:18:32.869046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.130 [2024-12-05 03:18:32.869095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:02.130 [2024-12-05 03:18:32.869106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:34:02.130 [2024-12-05 03:18:32.869114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.130 [2024-12-05 03:18:32.869570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:34:02.130 [2024-12-05 03:18:32.869616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:02.130 [2024-12-05 03:18:32.869626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:34:02.130 [2024-12-05 03:18:32.869634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.130 [2024-12-05 03:18:32.869707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.130 [2024-12-05 03:18:32.869717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:02.130 [2024-12-05 03:18:32.869726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:34:02.130 [2024-12-05 03:18:32.869733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.130 [2024-12-05 03:18:32.869756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.130 [2024-12-05 03:18:32.869764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:02.130 [2024-12-05 03:18:32.869774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:34:02.130 [2024-12-05 03:18:32.869782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.130 [2024-12-05 03:18:32.869803] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:02.130 [2024-12-05 03:18:32.874296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.130 [2024-12-05 03:18:32.874450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:02.130 [2024-12-05 03:18:32.874531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.497 ms 00:34:02.130 [2024-12-05 03:18:32.874555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.130 [2024-12-05 03:18:32.874617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.130 [2024-12-05 03:18:32.874639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:02.130 [2024-12-05 03:18:32.874659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:34:02.130 [2024-12-05 03:18:32.874678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.130 [2024-12-05 03:18:32.874749] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:02.130 [2024-12-05 03:18:32.875520] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:02.130 [2024-12-05 03:18:32.875689] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:02.130 [2024-12-05 03:18:32.875768] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:02.130 [2024-12-05 03:18:32.875902] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:02.130 [2024-12-05 03:18:32.875935] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:02.130 [2024-12-05 03:18:32.875968] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:02.130 [2024-12-05 03:18:32.876000] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:02.130 [2024-12-05 03:18:32.876030] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:02.130 [2024-12-05 03:18:32.876126] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:02.130 [2024-12-05 03:18:32.876148] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:02.130 [2024-12-05 03:18:32.876167] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:02.130 [2024-12-05 03:18:32.876186] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:02.130 [2024-12-05 03:18:32.876208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.130 [2024-12-05 03:18:32.876228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:02.130 [2024-12-05 03:18:32.876248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.461 ms 00:34:02.130 [2024-12-05 03:18:32.876267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.130 [2024-12-05 03:18:32.876391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.130 [2024-12-05 03:18:32.876460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:02.130 [2024-12-05 03:18:32.876484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:34:02.130 [2024-12-05 03:18:32.876508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.130 [2024-12-05 03:18:32.876637] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:02.130 [2024-12-05 03:18:32.876664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:02.130 [2024-12-05 03:18:32.876674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:02.130 [2024-12-05 03:18:32.876682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:02.130 [2024-12-05 03:18:32.876690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:02.130 [2024-12-05 03:18:32.876698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:02.130 [2024-12-05 03:18:32.876706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:02.130 [2024-12-05 03:18:32.876713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:02.130 [2024-12-05 03:18:32.876724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:02.130 [2024-12-05 03:18:32.876731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:02.130 [2024-12-05 03:18:32.876738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:02.130 [2024-12-05 03:18:32.876745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:02.130 [2024-12-05 03:18:32.876752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:02.130 [2024-12-05 03:18:32.876760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:02.130 [2024-12-05 03:18:32.876767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:02.130 [2024-12-05 03:18:32.876781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:02.130 [2024-12-05 03:18:32.876788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:02.130 [2024-12-05 03:18:32.876795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:02.130 [2024-12-05 03:18:32.876802] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:02.130 [2024-12-05 03:18:32.876809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:02.130 [2024-12-05 03:18:32.876816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:02.130 [2024-12-05 03:18:32.876823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:02.130 [2024-12-05 03:18:32.876830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:02.131 [2024-12-05 03:18:32.876837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:02.131 [2024-12-05 03:18:32.876844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:02.131 [2024-12-05 03:18:32.876851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:02.131 [2024-12-05 03:18:32.876858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:02.131 [2024-12-05 03:18:32.876865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:02.131 [2024-12-05 03:18:32.876872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:02.131 [2024-12-05 03:18:32.876879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:02.131 [2024-12-05 03:18:32.876886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:02.131 [2024-12-05 03:18:32.876892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:02.131 [2024-12-05 03:18:32.876899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:02.131 [2024-12-05 03:18:32.876906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:02.131 [2024-12-05 03:18:32.876913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:02.131 [2024-12-05 03:18:32.876920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:02.131 [2024-12-05 03:18:32.876927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:02.131 [2024-12-05 03:18:32.876934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:02.131 [2024-12-05 03:18:32.876941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:02.131 [2024-12-05 03:18:32.876948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:02.131 [2024-12-05 03:18:32.876957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:02.131 [2024-12-05 03:18:32.876964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:02.131 [2024-12-05 03:18:32.876971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:02.131 [2024-12-05 03:18:32.876978] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:02.131 [2024-12-05 03:18:32.876986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:02.131 [2024-12-05 03:18:32.876993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:02.131 [2024-12-05 03:18:32.877001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:02.131 [2024-12-05 03:18:32.877011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:02.131 [2024-12-05 03:18:32.877019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:02.131 [2024-12-05 03:18:32.877025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:02.131 
[2024-12-05 03:18:32.877032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:02.131 [2024-12-05 03:18:32.877039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:02.131 [2024-12-05 03:18:32.877045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:02.131 [2024-12-05 03:18:32.877054] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:02.131 [2024-12-05 03:18:32.877064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:02.131 [2024-12-05 03:18:32.877092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:02.131 [2024-12-05 03:18:32.877101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:02.131 [2024-12-05 03:18:32.877108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:02.131 [2024-12-05 03:18:32.877116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:02.131 [2024-12-05 03:18:32.877123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:02.131 [2024-12-05 03:18:32.877130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:02.131 [2024-12-05 03:18:32.877137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:02.131 [2024-12-05 03:18:32.877145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:02.131 [2024-12-05 03:18:32.877152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:02.131 [2024-12-05 03:18:32.877160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:02.131 [2024-12-05 03:18:32.877167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:02.131 [2024-12-05 03:18:32.877175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:02.131 [2024-12-05 03:18:32.877182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:02.131 [2024-12-05 03:18:32.877190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:02.131 [2024-12-05 03:18:32.877198] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:02.131 [2024-12-05 03:18:32.877207] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:02.131 [2024-12-05 03:18:32.877215] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:34:02.131 [2024-12-05 03:18:32.877226] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:02.131 [2024-12-05 03:18:32.877234] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:02.131 [2024-12-05 03:18:32.877242] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:02.131 [2024-12-05 03:18:32.877251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.131 [2024-12-05 03:18:32.877258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:02.131 [2024-12-05 03:18:32.877267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:34:02.131 [2024-12-05 03:18:32.877274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.131 [2024-12-05 03:18:32.905251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.131 [2024-12-05 03:18:32.905297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:02.131 [2024-12-05 03:18:32.905310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.932 ms 00:34:02.131 [2024-12-05 03:18:32.905319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.131 [2024-12-05 03:18:32.905410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.131 [2024-12-05 03:18:32.905419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:02.131 [2024-12-05 03:18:32.905431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:34:02.131 [2024-12-05 03:18:32.905440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.131 [2024-12-05 03:18:32.948190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.131 [2024-12-05 03:18:32.948405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:02.131 [2024-12-05 03:18:32.948429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.678 ms 00:34:02.131 [2024-12-05 03:18:32.948439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.131 [2024-12-05 03:18:32.948496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.131 [2024-12-05 03:18:32.948507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:02.131 [2024-12-05 03:18:32.948518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:02.131 [2024-12-05 03:18:32.948525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.131 [2024-12-05 03:18:32.948649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.131 [2024-12-05 03:18:32.948661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:02.131 [2024-12-05 03:18:32.948670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:34:02.131 [2024-12-05 03:18:32.948678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.131 [2024-12-05 03:18:32.948815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.131 [2024-12-05 03:18:32.948826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:02.131 [2024-12-05 03:18:32.948836] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:34:02.131 [2024-12-05 03:18:32.948844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.131 [2024-12-05 03:18:32.964843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.131 [2024-12-05 03:18:32.964893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:02.131 [2024-12-05 03:18:32.964905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.980 ms 00:34:02.131 [2024-12-05 03:18:32.964913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.131 [2024-12-05 03:18:32.965099] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:34:02.131 [2024-12-05 03:18:32.965114] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:02.131 [2024-12-05 03:18:32.965128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.131 [2024-12-05 03:18:32.965137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:02.131 [2024-12-05 03:18:32.965146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:34:02.131 [2024-12-05 03:18:32.965154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.391 [2024-12-05 03:18:32.977440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.391 [2024-12-05 03:18:32.977484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:02.391 [2024-12-05 03:18:32.977496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.265 ms 00:34:02.391 [2024-12-05 03:18:32.977503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.391 [2024-12-05 03:18:32.977649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.391 [2024-12-05 03:18:32.977658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:02.391 [2024-12-05 03:18:32.977667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:34:02.391 [2024-12-05 03:18:32.977680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.391 [2024-12-05 03:18:32.977734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.391 [2024-12-05 03:18:32.977744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:02.391 [2024-12-05 03:18:32.977762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:34:02.391 [2024-12-05 03:18:32.977769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.391 [2024-12-05 03:18:32.978372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.391 [2024-12-05 03:18:32.978388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:02.391 [2024-12-05 03:18:32.978398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:34:02.391 [2024-12-05 03:18:32.978406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.391 [2024-12-05 03:18:32.978429] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:34:02.391 [2024-12-05 03:18:32.978440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.391 [2024-12-05 03:18:32.978448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:34:02.391 [2024-12-05 03:18:32.978456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:34:02.391 [2024-12-05 03:18:32.978463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.391 [2024-12-05 03:18:32.991095] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:02.391 [2024-12-05 03:18:32.991403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.391 [2024-12-05 03:18:32.991421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:02.391 [2024-12-05 03:18:32.991432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.921 ms 00:34:02.391 [2024-12-05 03:18:32.991439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.391 [2024-12-05 03:18:32.993690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.391 [2024-12-05 03:18:32.993724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:02.391 [2024-12-05 03:18:32.993735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.225 ms 00:34:02.391 [2024-12-05 03:18:32.993742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.391 [2024-12-05 03:18:32.993841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.391 [2024-12-05 03:18:32.993851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:02.391 [2024-12-05 03:18:32.993860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:34:02.391 [2024-12-05 03:18:32.993868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.391 [2024-12-05 03:18:32.993891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.391 [2024-12-05 03:18:32.993906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:02.391 [2024-12-05 03:18:32.993914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:02.391 [2024-12-05 03:18:32.993922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.391 [2024-12-05 03:18:32.993955] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:02.391 [2024-12-05 03:18:32.993965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.391 [2024-12-05 03:18:32.993973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:02.391 [2024-12-05 03:18:32.993982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:34:02.391 [2024-12-05 03:18:32.993990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.391 [2024-12-05 03:18:33.020924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.391 [2024-12-05 03:18:33.020977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:02.391 [2024-12-05 03:18:33.020990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.914 ms 00:34:02.392 [2024-12-05 03:18:33.020999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.392 [2024-12-05 03:18:33.021109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.392 [2024-12-05 03:18:33.021122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:02.392 [2024-12-05 03:18:33.021132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.058 ms 00:34:02.392 [2024-12-05 03:18:33.021140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.392 [2024-12-05 03:18:33.022552] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 154.773 ms, result 0 00:34:03.331  [2024-12-05T03:18:35.160Z] Copying: 18/1024 [MB] (18 MBps) [2024-12-05T03:18:36.097Z] Copying: 35/1024 [MB] (17 MBps) [2024-12-05T03:18:37.036Z] Copying: 50/1024 [MB] (15 MBps) [2024-12-05T03:18:38.423Z] Copying: 81/1024 [MB] (30 MBps) [2024-12-05T03:18:39.366Z] Copying: 113/1024 [MB] (31 MBps) [2024-12-05T03:18:40.309Z] Copying: 141/1024 [MB] (28 MBps) [2024-12-05T03:18:41.266Z] Copying: 163/1024 [MB] (21 MBps) [2024-12-05T03:18:42.301Z] Copying: 180/1024 [MB] (17 MBps) [2024-12-05T03:18:43.244Z] Copying: 204/1024 [MB] (23 MBps) [2024-12-05T03:18:44.217Z] Copying: 226/1024 [MB] (22 MBps) [2024-12-05T03:18:45.162Z] Copying: 247/1024 [MB] (21 MBps) [2024-12-05T03:18:46.102Z] Copying: 263/1024 [MB] (15 MBps) [2024-12-05T03:18:47.042Z] Copying: 277/1024 [MB] (14 MBps) [2024-12-05T03:18:48.427Z] Copying: 289/1024 [MB] (11 MBps) [2024-12-05T03:18:49.369Z] Copying: 304/1024 [MB] (15 MBps) [2024-12-05T03:18:50.312Z] Copying: 322/1024 [MB] (17 MBps) [2024-12-05T03:18:51.258Z] Copying: 334/1024 [MB] (12 MBps) [2024-12-05T03:18:52.202Z] Copying: 346/1024 [MB] (12 MBps) [2024-12-05T03:18:53.144Z] Copying: 360/1024 [MB] (14 MBps) [2024-12-05T03:18:54.087Z] Copying: 372/1024 [MB] (11 MBps) [2024-12-05T03:18:55.471Z] Copying: 390/1024 [MB] (17 MBps) [2024-12-05T03:18:56.040Z] Copying: 400/1024 [MB] (10 MBps) [2024-12-05T03:18:57.424Z] Copying: 418/1024 [MB] (18 MBps) [2024-12-05T03:18:58.369Z] Copying: 449/1024 [MB] (31 MBps) [2024-12-05T03:18:59.315Z] Copying: 467/1024 [MB] (17 MBps) [2024-12-05T03:19:00.260Z] Copying: 478/1024 [MB] (11 MBps) [2024-12-05T03:19:01.203Z] Copying: 505/1024 [MB] (26 MBps) [2024-12-05T03:19:02.149Z] Copying: 516/1024 [MB] (11 MBps) [2024-12-05T03:19:03.095Z] Copying: 528/1024 [MB] (11 MBps) [2024-12-05T03:19:04.041Z] Copying: 539/1024 [MB] (10 MBps) [2024-12-05T03:19:05.428Z] Copying: 552/1024 [MB] (13 MBps) [2024-12-05T03:19:06.368Z] Copying: 565/1024 [MB] (12 MBps) [2024-12-05T03:19:07.311Z] Copying: 578/1024 [MB] (12 MBps) [2024-12-05T03:19:08.255Z] Copying: 588/1024 [MB] (10 MBps) [2024-12-05T03:19:09.199Z] Copying: 601/1024 [MB] (13 MBps) [2024-12-05T03:19:10.145Z] Copying: 614/1024 [MB] (12 MBps) [2024-12-05T03:19:11.090Z] Copying: 631/1024 [MB] (17 MBps) [2024-12-05T03:19:12.034Z] Copying: 644/1024 [MB] (12 MBps) [2024-12-05T03:19:13.443Z] Copying: 659/1024 [MB] (14 MBps) [2024-12-05T03:19:14.392Z] Copying: 678/1024 [MB] (18 MBps) [2024-12-05T03:19:15.335Z] Copying: 691/1024 [MB] (13 MBps) [2024-12-05T03:19:16.275Z] Copying: 708/1024 [MB] (16 MBps) [2024-12-05T03:19:17.223Z] Copying: 722/1024 [MB] (14 MBps) [2024-12-05T03:19:18.168Z] Copying: 732/1024 [MB] (10 MBps) [2024-12-05T03:19:19.112Z] Copying: 745/1024 [MB] (13 MBps) [2024-12-05T03:19:20.057Z] Copying: 759/1024 [MB] (13 MBps) [2024-12-05T03:19:21.447Z] Copying: 774/1024 [MB] (15 MBps) [2024-12-05T03:19:22.390Z] Copying: 788/1024 [MB] (13 MBps) [2024-12-05T03:19:23.334Z] Copying: 798/1024 [MB] (10 MBps) [2024-12-05T03:19:24.279Z] Copying: 817/1024 [MB] (18 MBps) [2024-12-05T03:19:25.225Z] Copying: 832/1024 [MB] (15 MBps) [2024-12-05T03:19:26.175Z] Copying: 847/1024 [MB] (15 MBps) [2024-12-05T03:19:27.119Z] Copying: 862/1024 [MB] (15 MBps) [2024-12-05T03:19:28.065Z] Copying: 875/1024 
[MB] (13 MBps) [2024-12-05T03:19:29.453Z] Copying: 887/1024 [MB] (11 MBps) [2024-12-05T03:19:30.398Z] Copying: 911/1024 [MB] (24 MBps) [2024-12-05T03:19:31.340Z] Copying: 929/1024 [MB] (17 MBps) [2024-12-05T03:19:32.282Z] Copying: 958/1024 [MB] (29 MBps) [2024-12-05T03:19:33.225Z] Copying: 972/1024 [MB] (13 MBps) [2024-12-05T03:19:34.167Z] Copying: 987/1024 [MB] (14 MBps) [2024-12-05T03:19:35.113Z] Copying: 1002/1024 [MB] (15 MBps) [2024-12-05T03:19:35.686Z] Copying: 1023/1024 [MB] (20 MBps) [2024-12-05T03:19:35.686Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-05 03:19:35.517771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.842 [2024-12-05 03:19:35.517820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:04.842 [2024-12-05 03:19:35.517832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:35:04.842 [2024-12-05 03:19:35.517839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.842 [2024-12-05 03:19:35.519323] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:04.842 [2024-12-05 03:19:35.522721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.842 [2024-12-05 03:19:35.522824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:04.842 [2024-12-05 03:19:35.522836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.283 ms 00:35:04.842 [2024-12-05 03:19:35.522843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.842 [2024-12-05 03:19:35.531839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.842 [2024-12-05 03:19:35.531868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:04.842 [2024-12-05 03:19:35.531876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.179 ms 00:35:04.842 [2024-12-05 03:19:35.531882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.842 [2024-12-05 03:19:35.531902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.842 [2024-12-05 03:19:35.531909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:35:04.842 [2024-12-05 03:19:35.531916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:35:04.842 [2024-12-05 03:19:35.531922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.842 [2024-12-05 03:19:35.531960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.842 [2024-12-05 03:19:35.531968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:35:04.842 [2024-12-05 03:19:35.531974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:35:04.842 [2024-12-05 03:19:35.531980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.842 [2024-12-05 03:19:35.531989] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:04.842 [2024-12-05 03:19:35.531998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 125952 / 261120 wr_cnt: 1 state: open 00:35:04.842 [2024-12-05 03:19:35.532005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:04.842 [2024-12-05 03:19:35.532012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:04.842 [2024-12-05 
03:19:35.532017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:04.842 [2024-12-05 03:19:35.532023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:04.842 [2024-12-05 03:19:35.532029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:04.842 [2024-12-05 03:19:35.532035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:04.842 [2024-12-05 03:19:35.532041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:04.842 [2024-12-05 03:19:35.532046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:04.842 [2024-12-05 03:19:35.532053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:04.842 [2024-12-05 03:19:35.532059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:04.842 [2024-12-05 03:19:35.532064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:04.842 [2024-12-05 03:19:35.532083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:04.842 [2024-12-05 03:19:35.532089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:04.842 [2024-12-05 03:19:35.532095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:04.842 [2024-12-05 03:19:35.532101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:04.842 [2024-12-05 03:19:35.532106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:04.842 [2024-12-05 03:19:35.532112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:04.842 [2024-12-05 03:19:35.532118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:04.842 [2024-12-05 03:19:35.532124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:04.842 [2024-12-05 03:19:35.532130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:04.842 [2024-12-05 03:19:35.532136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:04.842 [2024-12-05 03:19:35.532143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:04.842 [2024-12-05 03:19:35.532149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:04.842 [2024-12-05 03:19:35.532155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:04.842 [2024-12-05 03:19:35.532160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:04.842 [2024-12-05 03:19:35.532166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 
00:35:04.843 [2024-12-05 03:19:35.532178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 
wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:04.843 [2024-12-05 03:19:35.532608] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:04.843 [2024-12-05 03:19:35.532614] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 71dee4a2-a84c-41e2-9929-a20823ca6df5 00:35:04.843 [2024-12-05 03:19:35.532620] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid 
LBAs: 125952 00:35:04.843 [2024-12-05 03:19:35.532625] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 125984 00:35:04.843 [2024-12-05 03:19:35.532630] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 125952 00:35:04.843 [2024-12-05 03:19:35.532637] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:35:04.843 [2024-12-05 03:19:35.532644] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:04.843 [2024-12-05 03:19:35.532650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:04.843 [2024-12-05 03:19:35.532655] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:04.843 [2024-12-05 03:19:35.532660] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:04.843 [2024-12-05 03:19:35.532665] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:04.843 [2024-12-05 03:19:35.532670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.843 [2024-12-05 03:19:35.532676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:04.843 [2024-12-05 03:19:35.532682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:35:04.843 [2024-12-05 03:19:35.532687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.843 [2024-12-05 03:19:35.542133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.843 [2024-12-05 03:19:35.542157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:04.844 [2024-12-05 03:19:35.542168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.435 ms 00:35:04.844 [2024-12-05 03:19:35.542174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.844 [2024-12-05 03:19:35.542438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.844 [2024-12-05 03:19:35.542449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:04.844 [2024-12-05 03:19:35.542455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:35:04.844 [2024-12-05 03:19:35.542461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.844 [2024-12-05 03:19:35.568093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.844 [2024-12-05 03:19:35.568122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:04.844 [2024-12-05 03:19:35.568130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.844 [2024-12-05 03:19:35.568136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.844 [2024-12-05 03:19:35.568178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.844 [2024-12-05 03:19:35.568185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:04.844 [2024-12-05 03:19:35.568191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.844 [2024-12-05 03:19:35.568197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.844 [2024-12-05 03:19:35.568235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.844 [2024-12-05 03:19:35.568243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:04.844 [2024-12-05 03:19:35.568251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.844 [2024-12-05 
03:19:35.568257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.844 [2024-12-05 03:19:35.568269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.844 [2024-12-05 03:19:35.568274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:04.844 [2024-12-05 03:19:35.568280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.844 [2024-12-05 03:19:35.568286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.844 [2024-12-05 03:19:35.627550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.844 [2024-12-05 03:19:35.627588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:04.844 [2024-12-05 03:19:35.627597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.844 [2024-12-05 03:19:35.627603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.844 [2024-12-05 03:19:35.676661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.844 [2024-12-05 03:19:35.676697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:04.844 [2024-12-05 03:19:35.676705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.844 [2024-12-05 03:19:35.676711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.844 [2024-12-05 03:19:35.676762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.844 [2024-12-05 03:19:35.676770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:04.844 [2024-12-05 03:19:35.676776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.844 [2024-12-05 03:19:35.676784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.844 [2024-12-05 03:19:35.676808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.844 [2024-12-05 03:19:35.676814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:04.844 [2024-12-05 03:19:35.676820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.844 [2024-12-05 03:19:35.676826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.844 [2024-12-05 03:19:35.676878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.844 [2024-12-05 03:19:35.676886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:04.844 [2024-12-05 03:19:35.676892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.844 [2024-12-05 03:19:35.676898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.844 [2024-12-05 03:19:35.676919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.844 [2024-12-05 03:19:35.676925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:04.844 [2024-12-05 03:19:35.676931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.844 [2024-12-05 03:19:35.676937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.844 [2024-12-05 03:19:35.676963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.844 [2024-12-05 03:19:35.676970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:04.844 [2024-12-05 03:19:35.676975] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.844 [2024-12-05 03:19:35.676981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.844 [2024-12-05 03:19:35.677014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.844 [2024-12-05 03:19:35.677021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:04.844 [2024-12-05 03:19:35.677027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.844 [2024-12-05 03:19:35.677033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.844 [2024-12-05 03:19:35.677143] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 160.337 ms, result 0 00:35:06.230 00:35:06.230 00:35:06.230 03:19:36 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:35:06.230 [2024-12-05 03:19:37.053892] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:35:06.230 [2024-12-05 03:19:37.054240] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86297 ] 00:35:06.492 [2024-12-05 03:19:37.211115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:06.492 [2024-12-05 03:19:37.292971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:06.753 [2024-12-05 03:19:37.502465] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:06.753 [2024-12-05 03:19:37.502509] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:07.016 [2024-12-05 03:19:37.653541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.016 [2024-12-05 03:19:37.653576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:07.016 [2024-12-05 03:19:37.653586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:35:07.016 [2024-12-05 03:19:37.653593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.016 [2024-12-05 03:19:37.653626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.016 [2024-12-05 03:19:37.653635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:07.016 [2024-12-05 03:19:37.653655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:35:07.016 [2024-12-05 03:19:37.653661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.016 [2024-12-05 03:19:37.653674] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:07.016 [2024-12-05 03:19:37.654217] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:07.016 [2024-12-05 03:19:37.654229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.016 [2024-12-05 03:19:37.654235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:07.016 [2024-12-05 03:19:37.654242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:35:07.016 [2024-12-05 
03:19:37.654247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.016 [2024-12-05 03:19:37.654429] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:35:07.016 [2024-12-05 03:19:37.654445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.016 [2024-12-05 03:19:37.654454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:35:07.016 [2024-12-05 03:19:37.654461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:35:07.016 [2024-12-05 03:19:37.654467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.016 [2024-12-05 03:19:37.654499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.016 [2024-12-05 03:19:37.654506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:35:07.016 [2024-12-05 03:19:37.654513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:35:07.016 [2024-12-05 03:19:37.654518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.016 [2024-12-05 03:19:37.654714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.016 [2024-12-05 03:19:37.654728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:07.016 [2024-12-05 03:19:37.654734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:35:07.016 [2024-12-05 03:19:37.654740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.016 [2024-12-05 03:19:37.654814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.016 [2024-12-05 03:19:37.654822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:07.016 [2024-12-05 03:19:37.654828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:35:07.016 [2024-12-05 03:19:37.654834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.016 [2024-12-05 03:19:37.654850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.016 [2024-12-05 03:19:37.654856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:07.016 [2024-12-05 03:19:37.654864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:07.016 [2024-12-05 03:19:37.654870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.016 [2024-12-05 03:19:37.654884] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:07.016 [2024-12-05 03:19:37.657691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.016 [2024-12-05 03:19:37.657714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:07.016 [2024-12-05 03:19:37.657721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.811 ms 00:35:07.016 [2024-12-05 03:19:37.657726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.016 [2024-12-05 03:19:37.657754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.016 [2024-12-05 03:19:37.657761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:07.016 [2024-12-05 03:19:37.657767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:35:07.016 [2024-12-05 03:19:37.657772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.016 [2024-12-05 03:19:37.657801] 
ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:35:07.016 [2024-12-05 03:19:37.657817] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:35:07.016 [2024-12-05 03:19:37.657844] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:35:07.016 [2024-12-05 03:19:37.657855] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:35:07.016 [2024-12-05 03:19:37.657935] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:35:07.016 [2024-12-05 03:19:37.657943] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:07.016 [2024-12-05 03:19:37.657950] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:35:07.016 [2024-12-05 03:19:37.657958] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:07.016 [2024-12-05 03:19:37.657965] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:07.016 [2024-12-05 03:19:37.657973] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:35:07.016 [2024-12-05 03:19:37.657979] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:07.016 [2024-12-05 03:19:37.657984] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:35:07.016 [2024-12-05 03:19:37.657990] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:35:07.016 [2024-12-05 03:19:37.657995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.016 [2024-12-05 03:19:37.658001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:07.017 [2024-12-05 03:19:37.658007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:35:07.017 [2024-12-05 03:19:37.658012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.017 [2024-12-05 03:19:37.658089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.017 [2024-12-05 03:19:37.658097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:07.017 [2024-12-05 03:19:37.658102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:35:07.017 [2024-12-05 03:19:37.658110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.017 [2024-12-05 03:19:37.658185] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:07.017 [2024-12-05 03:19:37.658194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:07.017 [2024-12-05 03:19:37.658200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:07.017 [2024-12-05 03:19:37.658206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:07.017 [2024-12-05 03:19:37.658212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:07.017 [2024-12-05 03:19:37.658218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:07.017 [2024-12-05 03:19:37.658224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:35:07.017 [2024-12-05 03:19:37.658230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md 00:35:07.017 [2024-12-05 03:19:37.658236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:35:07.017 [2024-12-05 03:19:37.658242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:07.017 [2024-12-05 03:19:37.658247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:07.017 [2024-12-05 03:19:37.658252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:35:07.017 [2024-12-05 03:19:37.658257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:07.017 [2024-12-05 03:19:37.658263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:07.017 [2024-12-05 03:19:37.658268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:35:07.017 [2024-12-05 03:19:37.658276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:07.017 [2024-12-05 03:19:37.658282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:07.017 [2024-12-05 03:19:37.658287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:35:07.017 [2024-12-05 03:19:37.658292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:07.017 [2024-12-05 03:19:37.658297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:07.017 [2024-12-05 03:19:37.658302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:35:07.017 [2024-12-05 03:19:37.658307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:07.017 [2024-12-05 03:19:37.658312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:07.017 [2024-12-05 03:19:37.658317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:35:07.017 [2024-12-05 03:19:37.658322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:07.017 [2024-12-05 03:19:37.658327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:07.017 [2024-12-05 03:19:37.658332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:35:07.017 [2024-12-05 03:19:37.658336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:07.017 [2024-12-05 03:19:37.658341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:07.017 [2024-12-05 03:19:37.658346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:35:07.017 [2024-12-05 03:19:37.658351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:07.017 [2024-12-05 03:19:37.658357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:07.017 [2024-12-05 03:19:37.658361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:35:07.017 [2024-12-05 03:19:37.658366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:07.017 [2024-12-05 03:19:37.658371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:07.017 [2024-12-05 03:19:37.658376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:35:07.017 [2024-12-05 03:19:37.658381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:07.017 [2024-12-05 03:19:37.658386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:35:07.017 [2024-12-05 03:19:37.658390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:35:07.017 [2024-12-05 03:19:37.658395] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:07.017 [2024-12-05 03:19:37.658401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:35:07.017 [2024-12-05 03:19:37.658406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:35:07.017 [2024-12-05 03:19:37.658412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:07.017 [2024-12-05 03:19:37.658417] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:07.017 [2024-12-05 03:19:37.658423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:07.017 [2024-12-05 03:19:37.658428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:07.017 [2024-12-05 03:19:37.658434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:07.017 [2024-12-05 03:19:37.658441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:35:07.017 [2024-12-05 03:19:37.658446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:07.017 [2024-12-05 03:19:37.658452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:07.017 [2024-12-05 03:19:37.658457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:07.017 [2024-12-05 03:19:37.658462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:07.017 [2024-12-05 03:19:37.658467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:07.017 [2024-12-05 03:19:37.658473] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:07.017 [2024-12-05 03:19:37.658479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:07.017 [2024-12-05 03:19:37.658485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:35:07.017 [2024-12-05 03:19:37.658491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:35:07.017 [2024-12-05 03:19:37.658496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:35:07.017 [2024-12-05 03:19:37.658501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:35:07.017 [2024-12-05 03:19:37.658506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:35:07.017 [2024-12-05 03:19:37.658512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:35:07.017 [2024-12-05 03:19:37.658517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:35:07.017 [2024-12-05 03:19:37.658522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:35:07.017 [2024-12-05 03:19:37.658528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:35:07.017 [2024-12-05 03:19:37.658533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 
ver:1 blk_offs:0x71a0 blk_sz:0x20 00:35:07.017 [2024-12-05 03:19:37.658539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:35:07.017 [2024-12-05 03:19:37.658544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:35:07.017 [2024-12-05 03:19:37.658549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:35:07.017 [2024-12-05 03:19:37.658555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:35:07.017 [2024-12-05 03:19:37.658560] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:35:07.017 [2024-12-05 03:19:37.658566] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:07.017 [2024-12-05 03:19:37.658572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:35:07.017 [2024-12-05 03:19:37.658579] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:07.017 [2024-12-05 03:19:37.658584] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:07.017 [2024-12-05 03:19:37.658589] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:07.017 [2024-12-05 03:19:37.658595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.017 [2024-12-05 03:19:37.658601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:07.017 [2024-12-05 03:19:37.658607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.462 ms 00:35:07.017 [2024-12-05 03:19:37.658612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.017 [2024-12-05 03:19:37.677293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.017 [2024-12-05 03:19:37.677318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:07.017 [2024-12-05 03:19:37.677325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.652 ms 00:35:07.017 [2024-12-05 03:19:37.677331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.017 [2024-12-05 03:19:37.677393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.017 [2024-12-05 03:19:37.677399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:07.017 [2024-12-05 03:19:37.677407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:35:07.017 [2024-12-05 03:19:37.677413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.017 [2024-12-05 03:19:37.722164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.017 [2024-12-05 03:19:37.722194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:07.017 [2024-12-05 03:19:37.722204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.714 ms 00:35:07.017 [2024-12-05 03:19:37.722210] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:35:07.017 [2024-12-05 03:19:37.722244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.017 [2024-12-05 03:19:37.722252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:07.018 [2024-12-05 03:19:37.722258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:35:07.018 [2024-12-05 03:19:37.722264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.018 [2024-12-05 03:19:37.722335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.018 [2024-12-05 03:19:37.722344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:07.018 [2024-12-05 03:19:37.722351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:35:07.018 [2024-12-05 03:19:37.722356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.018 [2024-12-05 03:19:37.722445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.018 [2024-12-05 03:19:37.722454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:07.018 [2024-12-05 03:19:37.722460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:35:07.018 [2024-12-05 03:19:37.722465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.018 [2024-12-05 03:19:37.732735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.018 [2024-12-05 03:19:37.732856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:07.018 [2024-12-05 03:19:37.732869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.256 ms 00:35:07.018 [2024-12-05 03:19:37.732876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.018 [2024-12-05 03:19:37.732963] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:35:07.018 [2024-12-05 03:19:37.732972] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:35:07.018 [2024-12-05 03:19:37.732980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.018 [2024-12-05 03:19:37.732988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:35:07.018 [2024-12-05 03:19:37.732994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:35:07.018 [2024-12-05 03:19:37.733000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.018 [2024-12-05 03:19:37.742167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.018 [2024-12-05 03:19:37.742199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:35:07.018 [2024-12-05 03:19:37.742207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.155 ms 00:35:07.018 [2024-12-05 03:19:37.742213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.018 [2024-12-05 03:19:37.742298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.018 [2024-12-05 03:19:37.742304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:35:07.018 [2024-12-05 03:19:37.742311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:35:07.018 [2024-12-05 03:19:37.742319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.018 [2024-12-05 03:19:37.742343] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.018 [2024-12-05 03:19:37.742350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:35:07.018 [2024-12-05 03:19:37.742356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:35:07.018 [2024-12-05 03:19:37.742366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.018 [2024-12-05 03:19:37.742783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.018 [2024-12-05 03:19:37.742792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:07.018 [2024-12-05 03:19:37.742798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms 00:35:07.018 [2024-12-05 03:19:37.742803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.018 [2024-12-05 03:19:37.742817] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:35:07.018 [2024-12-05 03:19:37.742823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.018 [2024-12-05 03:19:37.742829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:35:07.018 [2024-12-05 03:19:37.742836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:35:07.018 [2024-12-05 03:19:37.742841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.018 [2024-12-05 03:19:37.751403] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:35:07.018 [2024-12-05 03:19:37.751583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.018 [2024-12-05 03:19:37.751594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:07.018 [2024-12-05 03:19:37.751602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.728 ms 00:35:07.018 [2024-12-05 03:19:37.751607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.018 [2024-12-05 03:19:37.753277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.018 [2024-12-05 03:19:37.753298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:35:07.018 [2024-12-05 03:19:37.753305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.654 ms 00:35:07.018 [2024-12-05 03:19:37.753312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.018 [2024-12-05 03:19:37.753371] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:35:07.018 [2024-12-05 03:19:37.753716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.018 [2024-12-05 03:19:37.753724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:07.018 [2024-12-05 03:19:37.753731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.357 ms 00:35:07.018 [2024-12-05 03:19:37.753736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.018 [2024-12-05 03:19:37.753757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.018 [2024-12-05 03:19:37.753763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:07.018 [2024-12-05 03:19:37.753769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:35:07.018 [2024-12-05 03:19:37.753775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:35:07.018 [2024-12-05 03:19:37.753798] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:35:07.018 [2024-12-05 03:19:37.753806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.018 [2024-12-05 03:19:37.753812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:35:07.018 [2024-12-05 03:19:37.753818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:35:07.018 [2024-12-05 03:19:37.753824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.018 [2024-12-05 03:19:37.772119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.018 [2024-12-05 03:19:37.772145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:07.018 [2024-12-05 03:19:37.772153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.283 ms 00:35:07.018 [2024-12-05 03:19:37.772159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.018 [2024-12-05 03:19:37.772208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:07.018 [2024-12-05 03:19:37.772215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:35:07.018 [2024-12-05 03:19:37.772222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:35:07.018 [2024-12-05 03:19:37.772228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:07.018 [2024-12-05 03:19:37.772930] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 119.076 ms, result 0 00:35:08.406  [2024-12-05T03:19:40.194Z] Copying: 16/1024 [MB] (16 MBps) [2024-12-05T03:19:41.185Z] Copying: 29/1024 [MB] (12 MBps) [2024-12-05T03:19:42.128Z] Copying: 51/1024 [MB] (21 MBps) [2024-12-05T03:19:43.073Z] Copying: 70/1024 [MB] (18 MBps) [2024-12-05T03:19:44.017Z] Copying: 87/1024 [MB] (17 MBps) [2024-12-05T03:19:45.013Z] Copying: 98/1024 [MB] (11 MBps) [2024-12-05T03:19:45.968Z] Copying: 115/1024 [MB] (16 MBps) [2024-12-05T03:19:47.353Z] Copying: 125/1024 [MB] (10 MBps) [2024-12-05T03:19:47.927Z] Copying: 139/1024 [MB] (14 MBps) [2024-12-05T03:19:49.314Z] Copying: 151/1024 [MB] (11 MBps) [2024-12-05T03:19:50.258Z] Copying: 166/1024 [MB] (14 MBps) [2024-12-05T03:19:51.204Z] Copying: 181/1024 [MB] (14 MBps) [2024-12-05T03:19:52.150Z] Copying: 198/1024 [MB] (17 MBps) [2024-12-05T03:19:53.095Z] Copying: 210/1024 [MB] (11 MBps) [2024-12-05T03:19:54.041Z] Copying: 233/1024 [MB] (23 MBps) [2024-12-05T03:19:54.986Z] Copying: 244/1024 [MB] (11 MBps) [2024-12-05T03:19:55.946Z] Copying: 257/1024 [MB] (12 MBps) [2024-12-05T03:19:57.329Z] Copying: 269/1024 [MB] (12 MBps) [2024-12-05T03:19:58.272Z] Copying: 290/1024 [MB] (20 MBps) [2024-12-05T03:19:59.216Z] Copying: 307/1024 [MB] (17 MBps) [2024-12-05T03:20:00.161Z] Copying: 323/1024 [MB] (15 MBps) [2024-12-05T03:20:01.106Z] Copying: 338/1024 [MB] (15 MBps) [2024-12-05T03:20:02.050Z] Copying: 349/1024 [MB] (11 MBps) [2024-12-05T03:20:02.993Z] Copying: 363/1024 [MB] (13 MBps) [2024-12-05T03:20:03.938Z] Copying: 383/1024 [MB] (19 MBps) [2024-12-05T03:20:05.326Z] Copying: 398/1024 [MB] (14 MBps) [2024-12-05T03:20:06.269Z] Copying: 416/1024 [MB] (18 MBps) [2024-12-05T03:20:07.220Z] Copying: 429/1024 [MB] (13 MBps) [2024-12-05T03:20:08.162Z] Copying: 450/1024 [MB] (20 MBps) [2024-12-05T03:20:09.106Z] Copying: 473/1024 [MB] (23 MBps) [2024-12-05T03:20:10.052Z] Copying: 492/1024 [MB] (19 MBps) 
[2024-12-05T03:20:11.026Z] Copying: 512/1024 [MB] (19 MBps) [2024-12-05T03:20:11.969Z] Copying: 536/1024 [MB] (24 MBps) [2024-12-05T03:20:13.357Z] Copying: 553/1024 [MB] (16 MBps) [2024-12-05T03:20:13.929Z] Copying: 572/1024 [MB] (19 MBps) [2024-12-05T03:20:15.318Z] Copying: 588/1024 [MB] (16 MBps) [2024-12-05T03:20:16.257Z] Copying: 604/1024 [MB] (15 MBps) [2024-12-05T03:20:17.197Z] Copying: 626/1024 [MB] (21 MBps) [2024-12-05T03:20:18.141Z] Copying: 638/1024 [MB] (12 MBps) [2024-12-05T03:20:19.083Z] Copying: 661/1024 [MB] (22 MBps) [2024-12-05T03:20:20.026Z] Copying: 686/1024 [MB] (25 MBps) [2024-12-05T03:20:20.969Z] Copying: 704/1024 [MB] (18 MBps) [2024-12-05T03:20:22.356Z] Copying: 726/1024 [MB] (21 MBps) [2024-12-05T03:20:22.928Z] Copying: 742/1024 [MB] (16 MBps) [2024-12-05T03:20:24.316Z] Copying: 766/1024 [MB] (24 MBps) [2024-12-05T03:20:25.270Z] Copying: 787/1024 [MB] (20 MBps) [2024-12-05T03:20:26.217Z] Copying: 799/1024 [MB] (11 MBps) [2024-12-05T03:20:27.161Z] Copying: 811/1024 [MB] (11 MBps) [2024-12-05T03:20:28.107Z] Copying: 822/1024 [MB] (10 MBps) [2024-12-05T03:20:29.053Z] Copying: 836/1024 [MB] (14 MBps) [2024-12-05T03:20:29.997Z] Copying: 849/1024 [MB] (13 MBps) [2024-12-05T03:20:30.941Z] Copying: 870/1024 [MB] (20 MBps) [2024-12-05T03:20:32.329Z] Copying: 890/1024 [MB] (20 MBps) [2024-12-05T03:20:33.273Z] Copying: 905/1024 [MB] (14 MBps) [2024-12-05T03:20:34.218Z] Copying: 928/1024 [MB] (23 MBps) [2024-12-05T03:20:35.175Z] Copying: 951/1024 [MB] (23 MBps) [2024-12-05T03:20:36.117Z] Copying: 969/1024 [MB] (17 MBps) [2024-12-05T03:20:37.057Z] Copying: 981/1024 [MB] (11 MBps) [2024-12-05T03:20:37.999Z] Copying: 1000/1024 [MB] (19 MBps) [2024-12-05T03:20:38.961Z] Copying: 1015/1024 [MB] (14 MBps) [2024-12-05T03:20:39.258Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-05 03:20:39.162354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.414 [2024-12-05 03:20:39.162434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:36:08.414 [2024-12-05 03:20:39.162451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:36:08.414 [2024-12-05 03:20:39.162461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.414 [2024-12-05 03:20:39.162486] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:36:08.414 [2024-12-05 03:20:39.165764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.414 [2024-12-05 03:20:39.165925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:36:08.414 [2024-12-05 03:20:39.166001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.258 ms 00:36:08.414 [2024-12-05 03:20:39.166038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.414 [2024-12-05 03:20:39.166305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.414 [2024-12-05 03:20:39.166360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:36:08.414 [2024-12-05 03:20:39.166383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:36:08.414 [2024-12-05 03:20:39.167003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.414 [2024-12-05 03:20:39.167055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.414 [2024-12-05 03:20:39.167066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:36:08.414 
[2024-12-05 03:20:39.167091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:36:08.414 [2024-12-05 03:20:39.167101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.414 [2024-12-05 03:20:39.167165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.414 [2024-12-05 03:20:39.167179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:36:08.414 [2024-12-05 03:20:39.167187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:36:08.414 [2024-12-05 03:20:39.167196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.414 [2024-12-05 03:20:39.167211] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:36:08.414 [2024-12-05 03:20:39.167225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:36:08.414 [2024-12-05 03:20:39.167237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:36:08.414 [2024-12-05 03:20:39.167245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:36:08.414 [2024-12-05 03:20:39.167254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:08.414 [2024-12-05 03:20:39.167262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:08.414 [2024-12-05 03:20:39.167270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:08.414 [2024-12-05 03:20:39.167280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:08.414 [2024-12-05 03:20:39.167288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:08.414 [2024-12-05 03:20:39.167297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:08.414 [2024-12-05 03:20:39.167306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:08.414 [2024-12-05 03:20:39.167315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:08.414 [2024-12-05 03:20:39.167325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:08.414 [2024-12-05 03:20:39.167333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:08.414 [2024-12-05 03:20:39.167341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:08.414 [2024-12-05 03:20:39.167349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:08.414 [2024-12-05 03:20:39.167358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:08.414 [2024-12-05 03:20:39.167368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:08.414 [2024-12-05 03:20:39.167376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:08.414 [2024-12-05 03:20:39.167384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:36:08.414 [2024-12-05 03:20:39.167394] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:36:08.414 [2024-12-05 03:20:39.167402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:36:08.414 [2024-12-05 03:20:39.167409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:36:08.414 [2024-12-05 03:20:39.167419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:36:08.414 [2024-12-05 03:20:39.167426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:36:08.414 [2024-12-05 03:20:39.167433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 
03:20:39.167684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 
00:36:08.415 [2024-12-05 03:20:39.167899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.167995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.168004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.168012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.168020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.168028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.168035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.168044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.168051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.168059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.168066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.168337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.168372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.168402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 
wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.168432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.168462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.168493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.168523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.168648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.168679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:36:08.415 [2024-12-05 03:20:39.168717] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:36:08.415 [2024-12-05 03:20:39.168740] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 71dee4a2-a84c-41e2-9929-a20823ca6df5 00:36:08.415 [2024-12-05 03:20:39.168770] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:36:08.415 [2024-12-05 03:20:39.168963] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 5152 00:36:08.415 [2024-12-05 03:20:39.168985] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 5120 00:36:08.415 [2024-12-05 03:20:39.169013] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0063 00:36:08.415 [2024-12-05 03:20:39.169034] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:36:08.415 [2024-12-05 03:20:39.169053] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:36:08.415 [2024-12-05 03:20:39.169089] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:36:08.415 [2024-12-05 03:20:39.169109] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:36:08.415 [2024-12-05 03:20:39.169129] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:36:08.415 [2024-12-05 03:20:39.169148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.415 [2024-12-05 03:20:39.169169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:36:08.415 [2024-12-05 03:20:39.169190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.937 ms 00:36:08.415 [2024-12-05 03:20:39.169210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.415 [2024-12-05 03:20:39.183887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.415 [2024-12-05 03:20:39.184028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:36:08.415 [2024-12-05 03:20:39.184054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.641 ms 00:36:08.416 [2024-12-05 03:20:39.184063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.416 [2024-12-05 03:20:39.184473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:08.416 [2024-12-05 03:20:39.184496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:36:08.416 [2024-12-05 03:20:39.184507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:36:08.416 [2024-12-05 03:20:39.184515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.416 [2024-12-05 03:20:39.223605] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:08.416 [2024-12-05 03:20:39.223789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:08.416 [2024-12-05 03:20:39.223809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:08.416 [2024-12-05 03:20:39.223819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.416 [2024-12-05 03:20:39.223885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:08.416 [2024-12-05 03:20:39.223896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:08.416 [2024-12-05 03:20:39.223906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:08.416 [2024-12-05 03:20:39.223915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.416 [2024-12-05 03:20:39.223984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:08.416 [2024-12-05 03:20:39.224001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:08.416 [2024-12-05 03:20:39.224011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:08.416 [2024-12-05 03:20:39.224020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.416 [2024-12-05 03:20:39.224037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:08.416 [2024-12-05 03:20:39.224046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:08.416 [2024-12-05 03:20:39.224055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:08.416 [2024-12-05 03:20:39.224062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.680 [2024-12-05 03:20:39.309279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:08.680 [2024-12-05 03:20:39.309355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:08.680 [2024-12-05 03:20:39.309370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:08.680 [2024-12-05 03:20:39.309379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.680 [2024-12-05 03:20:39.376838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:08.680 [2024-12-05 03:20:39.376897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:08.680 [2024-12-05 03:20:39.376910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:08.680 [2024-12-05 03:20:39.376919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.680 [2024-12-05 03:20:39.377005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:08.680 [2024-12-05 03:20:39.377016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:08.680 [2024-12-05 03:20:39.377029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:08.680 [2024-12-05 03:20:39.377037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.680 [2024-12-05 03:20:39.377109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:08.680 [2024-12-05 03:20:39.377121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:08.680 [2024-12-05 03:20:39.377130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:08.680 [2024-12-05 03:20:39.377139] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:36:08.680 [2024-12-05 03:20:39.377219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:08.680 [2024-12-05 03:20:39.377231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:08.680 [2024-12-05 03:20:39.377240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:08.680 [2024-12-05 03:20:39.377252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.680 [2024-12-05 03:20:39.377279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:08.680 [2024-12-05 03:20:39.377288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:36:08.680 [2024-12-05 03:20:39.377298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:08.680 [2024-12-05 03:20:39.377306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.680 [2024-12-05 03:20:39.377354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:08.680 [2024-12-05 03:20:39.377366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:08.680 [2024-12-05 03:20:39.377375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:08.680 [2024-12-05 03:20:39.377386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.680 [2024-12-05 03:20:39.377432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:08.680 [2024-12-05 03:20:39.377444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:08.680 [2024-12-05 03:20:39.377453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:08.680 [2024-12-05 03:20:39.377461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:08.680 [2024-12-05 03:20:39.377594] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 215.203 ms, result 0 00:36:09.626 00:36:09.626 00:36:09.626 03:20:40 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:12.176 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:36:12.176 03:20:42 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:36:12.176 03:20:42 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:36:12.176 03:20:42 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:36:12.176 03:20:42 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:12.176 03:20:42 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:36:12.176 Process with pid 84127 is not found 00:36:12.176 Remove shared memory files 00:36:12.176 03:20:42 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 84127 00:36:12.176 03:20:42 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 84127 ']' 00:36:12.176 03:20:42 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 84127 00:36:12.176 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84127) - No such process 00:36:12.176 03:20:42 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 84127 is not found' 00:36:12.176 03:20:42 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:36:12.176 03:20:42 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # 
echo Remove shared memory files 00:36:12.176 03:20:42 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:36:12.176 03:20:42 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_71dee4a2-a84c-41e2-9929-a20823ca6df5_band_md /dev/hugepages/ftl_71dee4a2-a84c-41e2-9929-a20823ca6df5_l2p_l1 /dev/hugepages/ftl_71dee4a2-a84c-41e2-9929-a20823ca6df5_l2p_l2 /dev/hugepages/ftl_71dee4a2-a84c-41e2-9929-a20823ca6df5_l2p_l2_ctx /dev/hugepages/ftl_71dee4a2-a84c-41e2-9929-a20823ca6df5_nvc_md /dev/hugepages/ftl_71dee4a2-a84c-41e2-9929-a20823ca6df5_p2l_pool /dev/hugepages/ftl_71dee4a2-a84c-41e2-9929-a20823ca6df5_sb /dev/hugepages/ftl_71dee4a2-a84c-41e2-9929-a20823ca6df5_sb_shm /dev/hugepages/ftl_71dee4a2-a84c-41e2-9929-a20823ca6df5_trim_bitmap /dev/hugepages/ftl_71dee4a2-a84c-41e2-9929-a20823ca6df5_trim_log /dev/hugepages/ftl_71dee4a2-a84c-41e2-9929-a20823ca6df5_trim_md /dev/hugepages/ftl_71dee4a2-a84c-41e2-9929-a20823ca6df5_vmap 00:36:12.176 03:20:42 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:36:12.176 03:20:42 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:12.176 03:20:42 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:36:12.176 ************************************ 00:36:12.176 END TEST ftl_restore_fast 00:36:12.176 ************************************ 00:36:12.176 00:36:12.176 real 4m40.154s 00:36:12.176 user 4m27.317s 00:36:12.176 sys 0m12.245s 00:36:12.177 03:20:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:12.177 03:20:42 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:36:12.177 Process with pid 75001 is not found 00:36:12.177 03:20:42 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:36:12.177 03:20:42 ftl -- ftl/ftl.sh@14 -- # killprocess 75001 00:36:12.177 03:20:42 ftl -- common/autotest_common.sh@954 -- # '[' -z 75001 ']' 00:36:12.177 03:20:42 ftl -- common/autotest_common.sh@958 -- # kill -0 75001 00:36:12.177 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (75001) - No such process 00:36:12.177 03:20:42 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 75001 is not found' 00:36:12.177 03:20:42 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:36:12.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:12.177 03:20:42 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=86963 00:36:12.177 03:20:42 ftl -- ftl/ftl.sh@20 -- # waitforlisten 86963 00:36:12.177 03:20:42 ftl -- common/autotest_common.sh@835 -- # '[' -z 86963 ']' 00:36:12.177 03:20:42 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:12.177 03:20:42 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:12.177 03:20:42 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:12.177 03:20:42 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:12.177 03:20:42 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:12.177 03:20:42 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:12.177 [2024-12-05 03:20:42.762924] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:36:12.177 [2024-12-05 03:20:42.763294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86963 ] 00:36:12.177 [2024-12-05 03:20:42.929102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:12.437 [2024-12-05 03:20:43.049824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:13.009 03:20:43 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:13.009 03:20:43 ftl -- common/autotest_common.sh@868 -- # return 0 00:36:13.009 03:20:43 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:36:13.271 nvme0n1 00:36:13.271 03:20:44 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:36:13.271 03:20:44 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:36:13.271 03:20:44 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:36:13.532 03:20:44 ftl -- ftl/common.sh@28 -- # stores=a25c2f25-be74-40cb-871a-7a6810986950 00:36:13.532 03:20:44 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:36:13.532 03:20:44 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a25c2f25-be74-40cb-871a-7a6810986950 00:36:13.793 03:20:44 ftl -- ftl/ftl.sh@23 -- # killprocess 86963 00:36:13.793 03:20:44 ftl -- common/autotest_common.sh@954 -- # '[' -z 86963 ']' 00:36:13.793 03:20:44 ftl -- common/autotest_common.sh@958 -- # kill -0 86963 00:36:13.793 03:20:44 ftl -- common/autotest_common.sh@959 -- # uname 00:36:13.793 03:20:44 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:13.793 03:20:44 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86963 00:36:13.793 03:20:44 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:13.793 killing process with pid 86963 00:36:13.793 03:20:44 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:13.793 03:20:44 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86963' 00:36:13.793 03:20:44 ftl -- common/autotest_common.sh@973 -- # kill 86963 00:36:13.793 03:20:44 ftl -- common/autotest_common.sh@978 -- # wait 86963 00:36:15.196 03:20:45 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:36:15.457 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:15.457 Waiting for block devices as requested 00:36:15.457 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:36:15.457 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:36:15.718 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:36:15.718 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:36:21.011 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:36:21.011 03:20:51 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:36:21.011 Remove shared memory files 00:36:21.011 03:20:51 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:21.011 03:20:51 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:36:21.011 03:20:51 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:36:21.011 03:20:51 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:36:21.011 03:20:51 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:21.011 03:20:51 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:36:21.011 
************************************ 00:36:21.011 END TEST ftl 00:36:21.011 ************************************ 00:36:21.011 00:36:21.011 real 18m6.771s 00:36:21.011 user 20m4.836s 00:36:21.011 sys 1m30.116s 00:36:21.011 03:20:51 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:21.011 03:20:51 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:21.011 03:20:51 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:21.011 03:20:51 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:21.011 03:20:51 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:21.011 03:20:51 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:21.011 03:20:51 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:21.011 03:20:51 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:21.011 03:20:51 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:21.011 03:20:51 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:36:21.011 03:20:51 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:36:21.011 03:20:51 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:36:21.011 03:20:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:21.011 03:20:51 -- common/autotest_common.sh@10 -- # set +x 00:36:21.011 03:20:51 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:36:21.011 03:20:51 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:36:21.011 03:20:51 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:36:21.011 03:20:51 -- common/autotest_common.sh@10 -- # set +x 00:36:22.400 INFO: APP EXITING 00:36:22.400 INFO: killing all VMs 00:36:22.400 INFO: killing vhost app 00:36:22.400 INFO: EXIT DONE 00:36:22.661 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:22.922 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:36:22.922 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:36:23.183 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:36:23.183 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:36:23.445 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:23.706 Cleaning 00:36:23.706 Removing: /var/run/dpdk/spdk0/config 00:36:23.706 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:23.706 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:23.706 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:23.706 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:23.967 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:23.967 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:23.967 Removing: /var/run/dpdk/spdk0 00:36:23.967 Removing: /var/run/dpdk/spdk_pid56938 00:36:23.967 Removing: /var/run/dpdk/spdk_pid57134 00:36:23.967 Removing: /var/run/dpdk/spdk_pid57352 00:36:23.967 Removing: /var/run/dpdk/spdk_pid57445 00:36:23.967 Removing: /var/run/dpdk/spdk_pid57485 00:36:23.967 Removing: /var/run/dpdk/spdk_pid57607 00:36:23.967 Removing: /var/run/dpdk/spdk_pid57625 00:36:23.967 Removing: /var/run/dpdk/spdk_pid57813 00:36:23.967 Removing: /var/run/dpdk/spdk_pid57901 00:36:23.967 Removing: /var/run/dpdk/spdk_pid57990 00:36:23.967 Removing: /var/run/dpdk/spdk_pid58096 00:36:23.967 Removing: /var/run/dpdk/spdk_pid58187 00:36:23.967 Removing: /var/run/dpdk/spdk_pid58227 00:36:23.967 Removing: /var/run/dpdk/spdk_pid58258 00:36:23.967 Removing: /var/run/dpdk/spdk_pid58334 00:36:23.967 Removing: /var/run/dpdk/spdk_pid58407 00:36:23.967 Removing: /var/run/dpdk/spdk_pid58837 00:36:23.967 Removing: /var/run/dpdk/spdk_pid58896 
00:36:23.967 Removing: /var/run/dpdk/spdk_pid58948 00:36:23.967 Removing: /var/run/dpdk/spdk_pid58964 00:36:23.967 Removing: /var/run/dpdk/spdk_pid59055 00:36:23.967 Removing: /var/run/dpdk/spdk_pid59071 00:36:23.967 Removing: /var/run/dpdk/spdk_pid59173 00:36:23.967 Removing: /var/run/dpdk/spdk_pid59178 00:36:23.967 Removing: /var/run/dpdk/spdk_pid59239 00:36:23.967 Removing: /var/run/dpdk/spdk_pid59249 00:36:23.967 Removing: /var/run/dpdk/spdk_pid59302 00:36:23.967 Removing: /var/run/dpdk/spdk_pid59320 00:36:23.967 Removing: /var/run/dpdk/spdk_pid59479 00:36:23.967 Removing: /var/run/dpdk/spdk_pid59511 00:36:23.967 Removing: /var/run/dpdk/spdk_pid59595 00:36:23.967 Removing: /var/run/dpdk/spdk_pid59772 00:36:23.967 Removing: /var/run/dpdk/spdk_pid59851 00:36:23.967 Removing: /var/run/dpdk/spdk_pid59887 00:36:23.967 Removing: /var/run/dpdk/spdk_pid60329 00:36:23.967 Removing: /var/run/dpdk/spdk_pid60427 00:36:23.967 Removing: /var/run/dpdk/spdk_pid60539 00:36:23.967 Removing: /var/run/dpdk/spdk_pid60592 00:36:23.967 Removing: /var/run/dpdk/spdk_pid60612 00:36:23.967 Removing: /var/run/dpdk/spdk_pid60696 00:36:23.967 Removing: /var/run/dpdk/spdk_pid61324 00:36:23.967 Removing: /var/run/dpdk/spdk_pid61361 00:36:23.967 Removing: /var/run/dpdk/spdk_pid61834 00:36:23.967 Removing: /var/run/dpdk/spdk_pid61932 00:36:23.967 Removing: /var/run/dpdk/spdk_pid62047 00:36:23.967 Removing: /var/run/dpdk/spdk_pid62100 00:36:23.967 Removing: /var/run/dpdk/spdk_pid62120 00:36:23.967 Removing: /var/run/dpdk/spdk_pid62151 00:36:23.967 Removing: /var/run/dpdk/spdk_pid63988 00:36:23.967 Removing: /var/run/dpdk/spdk_pid64123 00:36:23.967 Removing: /var/run/dpdk/spdk_pid64127 00:36:23.967 Removing: /var/run/dpdk/spdk_pid64139 00:36:23.967 Removing: /var/run/dpdk/spdk_pid64186 00:36:23.967 Removing: /var/run/dpdk/spdk_pid64190 00:36:23.967 Removing: /var/run/dpdk/spdk_pid64202 00:36:23.967 Removing: /var/run/dpdk/spdk_pid64247 00:36:23.967 Removing: /var/run/dpdk/spdk_pid64251 00:36:23.967 Removing: /var/run/dpdk/spdk_pid64263 00:36:23.967 Removing: /var/run/dpdk/spdk_pid64308 00:36:23.967 Removing: /var/run/dpdk/spdk_pid64312 00:36:23.967 Removing: /var/run/dpdk/spdk_pid64324 00:36:23.967 Removing: /var/run/dpdk/spdk_pid65710 00:36:23.967 Removing: /var/run/dpdk/spdk_pid65818 00:36:23.967 Removing: /var/run/dpdk/spdk_pid67229 00:36:23.967 Removing: /var/run/dpdk/spdk_pid68958 00:36:23.967 Removing: /var/run/dpdk/spdk_pid69027 00:36:23.967 Removing: /var/run/dpdk/spdk_pid69103 00:36:23.967 Removing: /var/run/dpdk/spdk_pid69213 00:36:23.967 Removing: /var/run/dpdk/spdk_pid69304 00:36:23.967 Removing: /var/run/dpdk/spdk_pid69400 00:36:23.967 Removing: /var/run/dpdk/spdk_pid69474 00:36:23.967 Removing: /var/run/dpdk/spdk_pid69554 00:36:23.967 Removing: /var/run/dpdk/spdk_pid69659 00:36:23.967 Removing: /var/run/dpdk/spdk_pid69751 00:36:23.967 Removing: /var/run/dpdk/spdk_pid69852 00:36:23.967 Removing: /var/run/dpdk/spdk_pid69927 00:36:23.967 Removing: /var/run/dpdk/spdk_pid69998 00:36:23.967 Removing: /var/run/dpdk/spdk_pid70106 00:36:23.968 Removing: /var/run/dpdk/spdk_pid70198 00:36:23.968 Removing: /var/run/dpdk/spdk_pid70294 00:36:23.968 Removing: /var/run/dpdk/spdk_pid70368 00:36:23.968 Removing: /var/run/dpdk/spdk_pid70443 00:36:23.968 Removing: /var/run/dpdk/spdk_pid70553 00:36:23.968 Removing: /var/run/dpdk/spdk_pid70639 00:36:23.968 Removing: /var/run/dpdk/spdk_pid70735 00:36:23.968 Removing: /var/run/dpdk/spdk_pid70809 00:36:23.968 Removing: /var/run/dpdk/spdk_pid70889 00:36:23.968 Removing: 
/var/run/dpdk/spdk_pid70963 00:36:23.968 Removing: /var/run/dpdk/spdk_pid71037 00:36:23.968 Removing: /var/run/dpdk/spdk_pid71147 00:36:23.968 Removing: /var/run/dpdk/spdk_pid71234 00:36:23.968 Removing: /var/run/dpdk/spdk_pid71331 00:36:23.968 Removing: /var/run/dpdk/spdk_pid71405 00:36:23.968 Removing: /var/run/dpdk/spdk_pid71478 00:36:23.968 Removing: /var/run/dpdk/spdk_pid71548 00:36:23.968 Removing: /var/run/dpdk/spdk_pid71628 00:36:23.968 Removing: /var/run/dpdk/spdk_pid71731 00:36:23.968 Removing: /var/run/dpdk/spdk_pid71822 00:36:23.968 Removing: /var/run/dpdk/spdk_pid71971 00:36:24.230 Removing: /var/run/dpdk/spdk_pid72254 00:36:24.230 Removing: /var/run/dpdk/spdk_pid72292 00:36:24.230 Removing: /var/run/dpdk/spdk_pid72742 00:36:24.230 Removing: /var/run/dpdk/spdk_pid72924 00:36:24.230 Removing: /var/run/dpdk/spdk_pid73026 00:36:24.230 Removing: /var/run/dpdk/spdk_pid73144 00:36:24.230 Removing: /var/run/dpdk/spdk_pid73194 00:36:24.230 Removing: /var/run/dpdk/spdk_pid73214 00:36:24.230 Removing: /var/run/dpdk/spdk_pid73513 00:36:24.230 Removing: /var/run/dpdk/spdk_pid73572 00:36:24.230 Removing: /var/run/dpdk/spdk_pid73647 00:36:24.230 Removing: /var/run/dpdk/spdk_pid74048 00:36:24.230 Removing: /var/run/dpdk/spdk_pid74191 00:36:24.230 Removing: /var/run/dpdk/spdk_pid75001 00:36:24.230 Removing: /var/run/dpdk/spdk_pid75128 00:36:24.230 Removing: /var/run/dpdk/spdk_pid75301 00:36:24.230 Removing: /var/run/dpdk/spdk_pid75404 00:36:24.230 Removing: /var/run/dpdk/spdk_pid75702 00:36:24.230 Removing: /var/run/dpdk/spdk_pid75961 00:36:24.230 Removing: /var/run/dpdk/spdk_pid76315 00:36:24.230 Removing: /var/run/dpdk/spdk_pid76498 00:36:24.230 Removing: /var/run/dpdk/spdk_pid76668 00:36:24.230 Removing: /var/run/dpdk/spdk_pid76715 00:36:24.230 Removing: /var/run/dpdk/spdk_pid76908 00:36:24.230 Removing: /var/run/dpdk/spdk_pid76933 00:36:24.230 Removing: /var/run/dpdk/spdk_pid76980 00:36:24.230 Removing: /var/run/dpdk/spdk_pid77230 00:36:24.230 Removing: /var/run/dpdk/spdk_pid77460 00:36:24.230 Removing: /var/run/dpdk/spdk_pid78193 00:36:24.230 Removing: /var/run/dpdk/spdk_pid79076 00:36:24.230 Removing: /var/run/dpdk/spdk_pid79677 00:36:24.230 Removing: /var/run/dpdk/spdk_pid80475 00:36:24.230 Removing: /var/run/dpdk/spdk_pid80629 00:36:24.230 Removing: /var/run/dpdk/spdk_pid80715 00:36:24.230 Removing: /var/run/dpdk/spdk_pid81238 00:36:24.230 Removing: /var/run/dpdk/spdk_pid81295 00:36:24.230 Removing: /var/run/dpdk/spdk_pid81967 00:36:24.230 Removing: /var/run/dpdk/spdk_pid82370 00:36:24.230 Removing: /var/run/dpdk/spdk_pid83117 00:36:24.230 Removing: /var/run/dpdk/spdk_pid83235 00:36:24.230 Removing: /var/run/dpdk/spdk_pid83282 00:36:24.230 Removing: /var/run/dpdk/spdk_pid83346 00:36:24.230 Removing: /var/run/dpdk/spdk_pid83398 00:36:24.230 Removing: /var/run/dpdk/spdk_pid83457 00:36:24.230 Removing: /var/run/dpdk/spdk_pid83644 00:36:24.230 Removing: /var/run/dpdk/spdk_pid83724 00:36:24.230 Removing: /var/run/dpdk/spdk_pid83791 00:36:24.230 Removing: /var/run/dpdk/spdk_pid83890 00:36:24.230 Removing: /var/run/dpdk/spdk_pid83922 00:36:24.230 Removing: /var/run/dpdk/spdk_pid83983 00:36:24.230 Removing: /var/run/dpdk/spdk_pid84127 00:36:24.230 Removing: /var/run/dpdk/spdk_pid84375 00:36:24.230 Removing: /var/run/dpdk/spdk_pid84961 00:36:24.230 Removing: /var/run/dpdk/spdk_pid85654 00:36:24.230 Removing: /var/run/dpdk/spdk_pid86297 00:36:24.230 Removing: /var/run/dpdk/spdk_pid86963 00:36:24.230 Clean 00:36:24.230 03:20:55 -- common/autotest_common.sh@1453 -- # return 0 00:36:24.230 
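The "Cleaning" / "Removing:" lines above record the autotest cleanup pass clearing SPDK/DPDK runtime state left behind by the test processes. As a rough sketch only (the actual cleanup logic lives in the autotest scripts, e.g. common/autotest_common.sh, and is not reproduced here), an equivalent manual cleanup of the paths shown in the log would be approximately:

    # Paths copied from the 'Removing:' lines above; the commands themselves are an
    # assumption for illustration, not the real autotest_cleanup implementation.
    rm -rf /var/run/dpdk/spdk0        # target runtime dir: config, fbarray_memseg-*, hugepage_info
    rm -rf /var/run/dpdk/spdk_pid*    # stale per-PID runtime state from earlier spdk_tgt/test runs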
00:36:24.230 03:20:55 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:24.230 03:20:55 -- common/autotest_common.sh@10 -- # set +x
00:36:24.230 03:20:55 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:36:24.230 03:20:55 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:24.230 03:20:55 -- common/autotest_common.sh@10 -- # set +x
00:36:24.490 03:20:55 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:36:24.490 03:20:55 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:36:24.490 03:20:55 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:36:24.491 03:20:55 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:36:24.491 03:20:55 -- spdk/autotest.sh@398 -- # hostname
00:36:24.491 03:20:55 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:36:24.491 geninfo: WARNING: invalid characters removed from testname!
00:36:51.104 03:21:20 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:53.653 03:21:24 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:56.202 03:21:26 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:59.511 03:21:29 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:02.060 03:21:32 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:04.641 03:21:35 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:07.195 03:21:37 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:37:07.195 03:21:37 -- spdk/autorun.sh@1 -- $ timing_finish
00:37:07.195 03:21:37 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:37:07.195 03:21:37 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:37:07.195 03:21:37 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:37:07.195 03:21:37 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:37:07.195 + [[ -n 5030 ]]
00:37:07.195 + sudo kill 5030
00:37:07.205 [Pipeline] }
00:37:07.221 [Pipeline] // timeout
00:37:07.226 [Pipeline] }
00:37:07.242 [Pipeline] // stage
00:37:07.248 [Pipeline] }
00:37:07.262 [Pipeline] // catchError
00:37:07.272 [Pipeline] stage
00:37:07.275 [Pipeline] { (Stop VM)
00:37:07.288 [Pipeline] sh
00:37:07.573 + vagrant halt
00:37:10.121 ==> default: Halting domain...
00:37:15.431 [Pipeline] sh
00:37:15.719 + vagrant destroy -f
00:37:18.263 ==> default: Removing domain...
00:37:18.540 [Pipeline] sh
00:37:18.825 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:37:18.837 [Pipeline] }
00:37:18.852 [Pipeline] // stage
00:37:18.858 [Pipeline] }
00:37:18.872 [Pipeline] // dir
00:37:18.878 [Pipeline] }
00:37:18.894 [Pipeline] // wrap
00:37:18.901 [Pipeline] }
00:37:18.914 [Pipeline] // catchError
00:37:18.925 [Pipeline] stage
00:37:18.927 [Pipeline] { (Epilogue)
00:37:18.941 [Pipeline] sh
00:37:19.228 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:37:24.526 [Pipeline] catchError
00:37:24.528 [Pipeline] {
00:37:24.541 [Pipeline] sh
00:37:24.826 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:37:24.826 Artifacts sizes are good
00:37:24.837 [Pipeline] }
00:37:24.855 [Pipeline] // catchError
00:37:24.870 [Pipeline] archiveArtifacts
00:37:24.879 Archiving artifacts
00:37:25.027 [Pipeline] cleanWs
00:37:25.059 [WS-CLEANUP] Deleting project workspace...
00:37:25.059 [WS-CLEANUP] Deferred wipeout is used...
00:37:25.066 [WS-CLEANUP] done
00:37:25.068 [Pipeline] }
00:37:25.084 [Pipeline] // stage
00:37:25.089 [Pipeline] }
00:37:25.104 [Pipeline] // node
00:37:25.110 [Pipeline] End of Pipeline
00:37:25.148 Finished: SUCCESS